Could part of the problem be the size of the problems? If we are thinking about things 100 years ago, they were probably much smaller scale. A local problem in your town or community might be bad, but you can realistically go about fixing it. The local church has a hole in the roof: you pass around a collection and local handymen fix it. There is a loud pub that wakes people up every night: it could have quiet hours enforced. Problems now might be much bigger: global warming, Russian nukes, the EU breaking encryption across a whole continent. While it's easy to organise and change something locally, larger problems (or problems in another part of the world which we now know about) are difficult, if not impossible, for people to solve. This could give more of a sense of helplessness and hopelessness.
Yeah, that seems plausible to me! Of course, we've had big problems for a long time, but maybe that, combined with the fact that news about them is becoming more accessible (or omnipresent) through various information-age channels, could explain why these things loom larger for more people. I like that point.
Great post. My one critique is this quote:
"Upshot: Even people who realize that conversation-topics aren’t representative will still be led to (some) excess pessimism by just how negative conversations tend to be."
The implication as I read it is that teaching people about the over-representativeness of negative narratives is a fool's errand, since even Bayesians will still have an overly-negative view of the world.
My response is that we know focusing on the negative has an evolutionary basis, and some amount of excess pessimism is good. The ideal (not the most accurate) estimate of positive news is likely lower than the true value of 90%. While it's true that even Bayesians are still overly pessimistic, it could be that their prior knowledge is the antidote needed to combat the worst effects of extreme pessimism caused by social media. Beyond a certain point of pessimism, the solution changes from "let's work on the bad" to "let's get rid of the system and start over". The rise of populism and illiberalism can be thought of as a function of the growth of this second type of thinking. Teaching people about the over-representativeness of negativity is the key to returning people to a "normal" amount of pessimism.
Yeah, I think I agree with this! I agree that it's too strong a conclusion to say, from this model, that teaching people about over-representativeness won't let them correct for it, or at least correct more for it. Really, what's driving the Bayesians' over-pessimism in this model is that they don't correct ENOUGH for the selection effect, so giving them evidence that it's bigger than they thought (or changing their mental model, so they leave open stronger-selection-effect possibilities) definitely seems desirable. And since real people aren't as rigid in their thinking and updating as these toy models of Bayesians, it definitely seems possible. Definitely not a fool's errand.
Love the approach of modeling!
I'm a bit confused by the (non-social-media) conclusion about pessimism, though, because the reason for the Bayesian pessimism is the improper guess of softmax precision, which, if it were in error the other way, would make them overly optimistic rather than pessimistic. I don't see a clear argument for why real people's estimates of precision would be in error one way rather than the other, so it doesn't ring strongly true to me.
Thanks! Yeah, agreed, this is a weakness in the model. My basic thought is that (1) it's hard for people to estimate how much this is driving things, and (2) since the baseline observation that solving-problems-could-lead-to-negativity isn't (I think!) obvious to most people before thinking about it, the default person will be under-estimating its effect (so over-estimating randomness). Of course, that's a very bounded-rationality explanation, since maybe people are being irrational for not pricing this in.
I was also thinking it didn't matter *too* much, since it definitely seems too much to expect people to price in the social-media effects. And even if they properly estimate the individual effects, it'll be quite natural to under-estimate them in the social media context.
But anyways, I think you're right that this is only a limited-rationality explanation. I'm thinking it's in some ways better than the simple "people completely ignore selection effects" explanation: more like, even if they're *somewhat* sophisticated in accounting for selection effects, it's easy for under-estimation of them to still lead to pessimism.
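To make that concrete, here's a toy simulation of the kind of thing I have in mind (to be clear: the setup, the numbers, and the helper functions below are my own assumptions for illustration, not the post's actual code). Speakers softmax-pick among candidate topics with extra weight on problems; an observer then inverts that selection effect using an *assumed* precision. Under-estimating the precision produces excess pessimism; over-estimating it produces excess optimism:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)

P_GOOD = 0.9      # true fraction of good news (the 90% from the post)
N_TOPICS = 10     # candidate topics per conversation (assumed)
BETA_TRUE = 2.0   # speakers' real softmax precision (assumed)
N_CONVOS = 100_000

def prob_bad_topic(p_good, beta, n=N_TOPICS):
    # P(chosen topic is bad) when each of n topics is bad w.p. 1 - p_good
    # and the speaker samples topics with softmax weight exp(beta) on bad ones.
    k = np.arange(n + 1)                    # number of bad candidate topics
    pk = binom.pmf(k, n, 1 - p_good)        # distribution of that count
    p_bad_given_k = k * np.exp(beta) / (k * np.exp(beta) + (n - k))
    return float(np.sum(pk * p_bad_given_k))

# Simulate what conversations actually look like under BETA_TRUE.
k_bad = rng.binomial(N_TOPICS, 1 - P_GOOD, size=N_CONVOS)
p_pick_bad = k_bad * np.exp(BETA_TRUE) / (k_bad * np.exp(BETA_TRUE) + N_TOPICS - k_bad)
f_obs = (rng.random(N_CONVOS) < p_pick_bad).mean()

def infer_p_good(f_obs, beta_assumed):
    # The observer inverts the selection effect under an *assumed* precision:
    # find the p_good whose predicted negativity best matches what they saw.
    grid = np.linspace(0.01, 0.999, 1000)
    preds = np.array([prob_bad_topic(p, beta_assumed) for p in grid])
    return grid[np.argmin(np.abs(preds - f_obs))]

for beta_assumed in (1.0, BETA_TRUE, 3.0):
    print(f"assumed precision {beta_assumed}: "
          f"inferred P(good) = {infer_p_good(f_obs, beta_assumed):.3f}")
# Under-estimating the precision (1.0) infers P(good) < 0.9: excess pessimism.
# Over-estimating it (3.0) infers P(good) > 0.9: excess optimism.
```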
That makes sense to me! I really like the "more options of topics leads to more negativity" piece of the demonstration. It feels pretty robust to model specification.
And of course it does seem that social media is making us unhappier. But it also seems to impact younger women the most. Any way we can model that?
Great question. I'm not sure. Or rather: there are definitely models that could help explain that, but this one is so streamlined that I'm not sure it makes any helpful predictions on that front.
I guess if you really wanted to stretch it: if there's homophily amongst two groups A and B (so As are more connected to other As, and Bs to other Bs), and As are more likely to share the most-beneficial thing (they have more softmax precision), then As will share (and, by homophily, see) more negative posts than Bs. So the model does predict that if you have homophily and differential degrees of problem-solving orientation, those who are more oriented toward it will get more negative.
So I suppose we could hypothesize that women are group A and men are group B...? Which maybe isn't nuts; but I give very low credence to this model being the right explanation for that.
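For what it's worth, here's a minimal sketch of that stretched version (all assumptions mine: the group betas, the homophily level, and the `frac_negative_shared` helper are made up for illustration, not anything from the post):

```python
import numpy as np
from scipy.stats import binom

P_GOOD, N_TOPICS = 0.9, 10
BETA_A, BETA_B = 3.0, 1.0  # assumed: group A is more problem-solving oriented
H = 0.8                    # assumed homophily: share of your feed from your own group

def frac_negative_shared(beta, p_good=P_GOOD, n=N_TOPICS):
    # Fraction of a group's shared posts that are negative, if each member
    # softmax-picks one of n candidate topics with weight exp(beta) on problems.
    k = np.arange(n + 1)
    pk = binom.pmf(k, n, 1 - p_good)
    return float(np.sum(pk * k * np.exp(beta) / (k * np.exp(beta) + (n - k))))

f_a, f_b = frac_negative_shared(BETA_A), frac_negative_shared(BETA_B)
feed_a = H * f_a + (1 - H) * f_b   # what A-members see under homophily
feed_b = H * f_b + (1 - H) * f_a
print(f"negativity in A's feed: {feed_a:.2f}; in B's feed: {feed_b:.2f}")
# With BETA_A > BETA_B and H > 0.5, As both share and see more negativity.
```

The qualitative prediction holds for any BETA_A > BETA_B with H > 0.5; the particular numbers don't mean anything.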