
Wonderful post!

While I think I largely agree, I want to draw attention to something you mention only briefly. I don't think skepticism should only be used as a reductio. I think taking seriously the dynamics you're identifying *ought* to create a lot of pressure towards skepticism.

I was just reading about an exchange between economists Piketty, Saez, and Zucman (PSZ) on the one hand and Auten and Splinter (AS) on the other. It concerned the question of how income inequality has changed over time, with PSZ arguing that it has increased a lot and AS denying this. The dispute turned on how one should extrapolate from reported income to unreported income, which pretty much by definition we can't measure directly. My guess is that a lot of people I know would be strongly disposed to side with one group or the other upon hearing about the dispute. Progressives concerned with inequality will side with PSZ; libertarians with more faith in markets, with AS. But for me, while I guess I'm not *completely* neutral, it's clear that I can't understand the guts of the debate--I have no idea how to estimate shares of unreported income!--and that I have only weakly reliable, superficial cues to go on. So if it turned out that better data completely vindicated one side, I would *not* bet at steep odds on which side it would be.
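To see why the extrapolation step carries so much weight, here's a minimal sketch with invented numbers (nothing like PSZ's or AS's actual figures): the same pot of unreported income, allocated under two different assumptions, moves the measured top-1% share by several points.

```python
# Toy illustration (invented numbers, not PSZ or AS figures): how the
# assumption about WHO earns unreported income drives the inequality estimate.

reported_top1 = 1_500   # reported income of the top 1% ($bn, hypothetical)
reported_rest = 8_500   # reported income of everyone else ($bn, hypothetical)
unreported    = 1_000   # total unreported income ($bn, hypothetical)

def top1_share(frac_to_top1: float) -> float:
    """Top-1% income share if `frac_to_top1` of unreported income goes to the top 1%."""
    top = reported_top1 + frac_to_top1 * unreported
    total = reported_top1 + reported_rest + unreported
    return top / total

# Assumption A: unreported income is distributed like reported income
# (so the top 1% gets the same 15% share of it).
share_a = top1_share(reported_top1 / (reported_top1 + reported_rest))

# Assumption B: unreported income is concentrated at the top (say 50% of it).
share_b = top1_share(0.50)

print(f"Top-1% share under A: {share_a:.1%}")  # 15.0%
print(f"Top-1% share under B: {share_b:.1%}")  # 18.2%
```

Stretch a gap like that over decades of tax data and the two allocation rules draw opposite trend lines, which is roughly the shape of the dispute as I understand it.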

I think there's a tension between accepting all the dynamics you're pointing out and complacently holding on to your political opinions, with the only cognitive upshot of that acceptance being that you concede the other side is also rational. Conceding they're also rational should have some relation to thinking they might be right, which should put some pressure on your own views.


Thanks Dan! Nice point. The example is a good one, and definitely one where (temperamentally) I'm right there with you. I think the cases that make skepticism look pretty bad to me are cases like "re-electing Trump will help the US economy" or (allowing myself to include partly normative claims in there) "Trump was a good president in his first term". I'm guessing few people will feel the draw of skepticism on those claims.

I think the argument then goes something like: (1) we use the same sorts of processes to form beliefs about economics as we do about politics, so (2) we should think our beliefs about one are generally rational [not: right] iff the other are; but (3) we must in the politics case [since we're not going to be skeptics], so (4) we should in the other cases too.

Laid out like that, there are a lot of places to get off the boat. Where do you get off? I'm guessing (1), but maybe you're also willing to be more skeptical in the politics case than me, so possibly (3)?


Yeah, I find it hard to generalize about categories of cases; I certainly don't want to be skeptical about political beliefs across the board. I think my biggest-picture differences with you are that (1) I'm happy to be more agnostic about politics, but also, where I'm not agnostic about politics, (2) I'm more willing to see some rational asymmetries between myself and people who disagree with me. That's in part because I'm not interested in (fully) rationalizing as wide a range of people's beliefs as you are. I *do* like your arguments for there being a lot of rational symmetry between political partisans (I don't exactly know how to define "partisan", but I think I've got a reasonably good grip on the idea), but I think one can aspire (quixotically, perhaps?) to being *more* rational about politics than the typical unreconstructed partisan. Basically, I like Jason Brennan's "Hobbits, Hooligans, and Vulcans" taxonomy, and I interpret your arguments as showing that there's basically rational parity between left-hooligans and right-hooligans.

On the specific cases you mention, I suspect I'm actually pretty agnostic about what effects a second Trump presidency would have on the economy. In general, I think people overrate how much credit or blame the president should get for the economy. (And fwiw, I think that view is pretty common among people I regard as thoughtful, informed commentators on political economy.) I think a lot would turn on how you precisify "helpful for the economy"--higher GDP growth? less income inequality? smaller trade deficits? (shudder)--which might mean that a disagreement that initially looks empirical really amounts to a difference of values.

On whether Trump was on balance a good president in his first term, I'll concede that I can't really get into a skeptical frame of mind on that one. But I also don't see many people who (in my judgment) aren't partisans ending up very pro-Trump. I feel like over the Trump years I ended up reading more conservatives than I used to, in part because the Trump phenomenon pushed so many thoughtful, intelligent conservatives out of their political tribe. You ended up with a lot of Never Trumpers offering nuanced, insightful views that exhibited more ideological independence than the typical fare on the mainstream left or the Trump right. In short, right-hooligans like Trump, but right-vulcans mostly don't.


Excellent essay; I don't have much to quibble with. I'm left of center politically, and I see most Republicans acting very rationally. It's rational for them to be selfish and want to pay less in taxes. It's rational for them to lie about Democrats, or at least to readily accept such lies from others, because they want to win elections. It's rational to support Republican candidates if you're against abortion, even if you would benefit economically from Democratic policies.

In the Prisoner's Dilemma, the selfish strategy (defecting) strictly dominates the altruistic one (cooperating), so it's rational to play it. But what if the other prisoner is your spouse? I propose an experiment that adjusts the payouts until you decide to change your strategy. The ratio of increased incarceration you're willing to endure so that the other prisoner's incarceration is reduced would be the altruism factor. It's rational to have zero altruism when you're completely selfish.
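Concretely, here's a minimal sketch of that experiment (the sentences are hypothetical, and `alpha` is my label for the altruism factor): weight the other prisoner's years by alpha in your own utility and see where defection stops being dominant.

```python
# Sketch of the proposed "altruism factor" experiment. Sentences are
# hypothetical; alpha weights the other prisoner's incarceration in my utility.

# sentences[(my_move, their_move)] = (my_years, their_years)
sentences = {
    ("C", "C"): (1, 1),   # both stay silent
    ("C", "D"): (3, 0),   # I'm silent, they talk
    ("D", "C"): (0, 3),   # I talk, they're silent
    ("D", "D"): (2, 2),   # both talk
}

def utility(my_move: str, their_move: str, alpha: float) -> float:
    """Utility = -(my years) - alpha * (their years)."""
    mine, theirs = sentences[(my_move, their_move)]
    return -mine - alpha * theirs

def dominant_move(alpha: float) -> str:
    """Return 'C' or 'D' if one move is best against both replies, else 'none'."""
    best_vs = {their: max("CD", key=lambda m: utility(m, their, alpha))
               for their in "CD"}
    moves = set(best_vs.values())
    return moves.pop() if len(moves) == 1 else "none"

for alpha in (0.0, 0.3, 0.51, 1.0):
    print(f"alpha={alpha:.2f}: dominant move = {dominant_move(alpha)}")
# With these sentences the crossover is alpha = 0.5: below it defection
# dominates, above it cooperation does (at exactly 0.5 you're indifferent).
```

The experiment would vary the payouts to locate each person's actual threshold; zero altruism means defection dominates no matter how the sentences are scaled.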

In short, I don't think the people I disagree with politically are any of the 5 things you described. I just think the richest are selfish and the poorest are altruistic but have very different cultures/values. What you overlook, in my opinion, is the active disinformation campaign by the rich Republicans to get the poor Republicans to vote: the propaganda from oil companies against climate change, from the Club for Growth against taxes, from Wall Street against regulations, etc. There is no doubt that regular people exist who believe some well-crafted lies; how would you characterize them?

I am a professional logician and I'm sure I believe some (but probably fewer) such lies from the other side. Let's take, for example, Hunter Biden's laptop. What does it actually show? I still have no idea.


Thanks Seth! I definitely agree that a big part of the equation is the role of media in convincing large chunks of the population of things. I'd characterize them as rationally misled—and then we could get into a debate over how to feel about such people, especially when the things they're doing might be actively harmful for you and yours.

I think you're also pushing on something I didn't talk much about: a big chunk of the reasons for lacking epistemic empathy are not that you think the other side is irrational, but that you think they are rationally pursuing bad goals (or ones you disagree with). I might feel very negatively toward a self-interested, rational economist who is screwing me in the prisoner's dilemma! This is something I should think and talk more about.

Thanks!


Great stuff! Especially liked the point about baselessness.

I was wondering about the double-standard: "Either (i) people should listen to their social circles and trusted authorities, or (ii) they shouldn’t."

Won't this depend on the quality of one's circle, and the grounds on which one trusts them?

Suppose I listen to a certain expert because she made a bunch of true predictions. Isn't that different from someone who listens to an entertaining crank whose predictions turn out to be false? (Think of someone who's still into Qanon despite the lack of a "storm.") I can't help but think that, at some point, a rational agent would hit the bricks and look for better experts.


Thanks Dan! You're right that this is a bit quick. I think Other-Dan's response is helpful. I guess I'm thinking that in those sorts of cases people are often right about all sorts of (small) things, so it's really hard to figure out what the right base rate to keep in mind is.

That said, I definitely think SOME people SOMEtimes are driven to their beliefs by (epistemic) irrationality. Some cases of QAnon might be exactly that (though I'd think there are plenty of rational ways in, too).

I haven't done a deep dive into that sort of conspiracy theory, but I also wonder how much of it starts as acting and then evolves from there. It might well be perfectly (practically) rational to go along with it for social connection or whatever, and not expect your true beliefs to actually shift. You could even see it as due to an OVERestimation of the power of rationality: "I know I'm rational, so I won't actually fall for this stuff", only to later be swept up in it because (surprise) rationality doesn't guard against going off the rails in ambiguous/complex information environments. Maybe? I hadn't thought about that much before.

Thanks again!


Fwiw I think this is right but just doesn't end up cutting all that much ice.

Most cable-news types (Carlson, Maddow) don't make lots of concrete, easily checkable predictions, and even when they do, they certainly don't present them in a form where it's easy to come back months or years later and check how they did. I think the rationalist/EA community's idea of encouraging people to make more concrete predictions with probabilities attached, which can then be checked and explicitly scored as in Tetlock-style superforecasting tournaments, is a wonderful one. But it's not an idea that's widely caught on--perhaps for obvious reasons: if you already have a wide following, why make it easy for them to get good reasons to distrust your judgment?--and given that people don't do this, it's really hard to get concrete evidence that somebody whose judgment you trusted turned out to be pretty unreliable.
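For concreteness, the scoring in those tournaments is simple to state: it's basically the Brier score, the mean squared gap between your stated probabilities and what actually happened. A minimal sketch with invented forecasts:

```python
# Minimal sketch of Brier scoring, the standard measure used in Tetlock-style
# forecasting tournaments. The forecasts and outcomes below are invented.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between stated probabilities and 0/1 outcomes.
    0.0 is perfect; always saying 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical pundits: one calibrated, one confident but wrong now and then.
careful  = [0.8, 0.3, 0.9, 0.4, 0.2]
bombast  = [0.99, 0.01, 0.99, 0.99, 0.01]
happened = [1, 0, 1, 0, 0]

print(f"careful pundit:  {brier_score(careful, happened):.3f}")   # 0.068
print(f"bombastic pundit: {brier_score(bombast, happened):.3f}")  # 0.196
```

The catch is exactly the one above: without recorded probabilities and resolution dates, there's nothing to feed into it.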

To the extent that this is done, it's done in a cherry-picked way that makes it too easy to dismiss: "I was right about Trump," "he was wrong about Iraq," etc. Plenty of people who now acknowledge themselves to have been wrong about Iraq and then Trump can convince themselves and their audiences, rightly or wrongly, that these were anomalous mistakes.


Cool post! I'm interested in the idea that some things you say seem to suggest that thinking hard (and doing empirical studies) about rationality won't help solve issues that are commonly taken to depend on the extent to which human beings are rational. For instance: the idea that being rational cannot save you from deriving absurdities, or that, since polarization has a rational mechanism, thinking rationally about politics won't eliminate its effects. If you are right, thinking rationally about rationality won't solve the issues we commonly think are driven by irrationality (because they aren't). At least in principle, it seems unattractive to go non-rationalist and say that the solution to these issues depends mainly on non-rational devices (like: absent a rational way to convince a political opponent, use coercion; or: emotional responses hold the key to eliminating some forms of attachment to past decisions). But as things stand, perhaps thinking about the limits of what rationality can do for us really does require endorsing some non-rational ways of solving some issues. Would you agree with making that connection between rational mechanisms of things that have bad effects and policies focusing on non-rational fixes for those effects?


Nice point! A few thoughts.

1) I like the point that, if things I say are right, rationally thinking about rationality won't predictably get you to the truth about rationality. I do buy that, though of course there's a bit of a felt instability there. Fun puzzle. It would explain, at least, why philosophers/economists/psychologists debate the topic endlessly, without agreement!

2) It might be useful to distinguish different levels of ideality. In any of the models I use where rational processing can lead to polarization (etc.), there are *more ideal* versions of those agents that wouldn't polarize. Those agents have access to abilities that real people don't (like the ability to introspect their own opinions, and/or to disambiguate their evidence)—that's why I don't think the better possibility shows that the worse way of doing things is irrational. (Compare: IDEALLY, we'd all be omniscient.) But it does suggest that even if rationality alone (in the context of ambiguity) won't help us, there might be ways to improve processing so that those more-ideal modes are within reach. (If we managed that, rationality would require us to use them and so would start leading to better results.)
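To make that contrast vivid, here's a deliberately crude toy (a cartoon built for this comment, not the actual models): two agents see the same signal stream, but ambiguous signals get read in line with whichever hypothesis the agent currently favors, while a "more ideal" agent can read the signals' actual content.

```python
# A deliberately crude toy -- a cartoon, NOT the actual models. Limited agents
# resolve ambiguous signals toward their current lean; ideal agents read the
# signals' true content. Likelihoods and the signal stream are invented.

P_HEADS_IF_H, P_HEADS_IF_NOT_H = 0.7, 0.3  # assumed signal likelihoods

def update(belief: float, heads: bool) -> float:
    """One Bayesian update on a heads/tails reading of a signal."""
    lh = P_HEADS_IF_H if heads else 1 - P_HEADS_IF_H
    ln = P_HEADS_IF_NOT_H if heads else 1 - P_HEADS_IF_NOT_H
    return belief * lh / (belief * lh + (1 - belief) * ln)

def final_belief(prior: float, stream, ideal: bool) -> float:
    belief = prior
    for heads, ambiguous in stream:
        if ambiguous and not ideal:
            heads = belief >= 0.5  # read ambiguity as favoring current lean
        belief = update(belief, heads)
    return belief

# Hand-built stream: 'H' = clear heads, 'T' = clear tails, 'A' = ambiguous
# signal whose true content is tails (so the evidence in fact points one way).
kinds = {"H": (True, False), "T": (False, False), "A": (False, True)}
stream = [kinds[c] for c in "AATATAHTAATHATAHTAAH"]

for prior in (0.8, 0.2):
    print(f"prior {prior}: limited -> {final_belief(prior, stream, False):.3f}, "
          f"ideal -> {final_belief(prior, stream, True):.3f}")
# Limited agents end near 1.0 and 0.0 (polarized); both ideal agents end near 0.
```

Nothing deep in the numbers, but it shows the structure: the divergence comes entirely from the ambiguity-resolution step, not from any failure to update correctly on what each agent takes the evidence to be; the agent who could disambiguate wouldn't polarize.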

3) That bears on your main question, I think. It suggests that there can be "non-rational" ways of improving people's conclusions that aren't necessarily *ir*rational (like inflaming their emotions or whatnot), but instead have to do with making reasoning, in a sense, easier. It reminds me a bit of what Philip Tetlock and company do when they try to give people reasoning tools to improve their predictions. (Sometimes it works!)

But yes, as you say, I do think there's a benefit to realizing that there are severe limits on what rationality ALONE will get you, and we need to think more about other mechanisms—from institutional design to better norms of discourse.
