In Defense of Epistemic Empathy
Knocking down the standard reasons for thinking that your opponents are dumb.
TLDR: Why think your ideological opponents are unreasonable? Common reasons: their views are (1) absurd, or (2) refutable, or (3) baseless, or (4) conformist, or (5) irrational. None are convincing.
Elizabeth is skeptical about the results of the 2020 election. Theo thinks Republicans are planning to institute a theocracy. Alan is convinced that AI will soon take over the world.
You probably think some (or all) of them are unhinged.
As I’ve argued before, we seem to be losing our epistemic empathy: our ability to both (1) be convinced that someone’s opinions are wrong, and yet (2) acknowledge that they might hold those opinions for good reasons. For example, since the 90s our descriptions of others as ‘crazy’, ‘stupid’, or ‘fools’ have skyrocketed.
The point of much of my work is to try to help us recover our epistemic empathy—to argue that reasonable processes can drive such disagreements, and that we have little evidence that irrationality (the philosophers’ term for being “crazy”, “stupid”, or a “fool”) explains it.
The most common reaction: “Clever argument. But surely you don’t believe it!”
I do.
Obviously people sometimes act and think irrationally. Obviously that sometimes helps explain how they end up with mistaken opinions. The question is whether we have good reason to think that this is generically the explanation for why people have such different opinions than we do.
Today, I want to take a critical look at some of the arguments people give for suspending their epistemic empathy: (1) that their opponents’ views are absurd; (2) that the questions have easy answers; (3) that their opponents don’t have good reasons for their beliefs; (4) that they’re just conforming to their group; and (5) that they’re irrational.
None are convincing.
Absurdity.
“Sure, reasonable people can disagree on some topics. But the opinions of Elizabeth, Theo, and Alan are so absurd that only irrationality could explain it.”
This argument overstates the power of rationality.
Spend a few years in academia, and you’ll see why. Especially in philosophy, it’ll become extremely salient that reasonable people often wind up with absurd views.
David Lewis thought that there were talking donkeys. (Since the best metaphysical system is one in which every possible world we can imagine is the way some spatio-temporally isolated world actually is.)
Timothy Williamson thinks that it’s impossible for me to not have existed—even if I’d never been born, I would’ve been something or other. (Since the best logical system is one on which necessarily everything necessarily exists.)
Peter Singer thinks that the fact that you failed to give $4,000 to the Against Malaria Foundation this morning is the moral equivalent of ignoring a drowning toddler as you walked into work. (Since there turns out to be no morally significant difference between the cases.)
And plenty of reasonable people (including sophisticated philosophers) think both of the following:
It’s monstrous to run over a bunny instead of slamming on your brakes, even if braking would hold up traffic significantly; yet
It’s totally fine to eat the carcass of an animal that was tortured for its entire life (in a factory farm), instead of eating a slightly-less-exciting meal of beans and rice.
David Lewis, Tim Williamson, and Peter Singer are brilliant, careful thinkers. Rationality is no guard against absurdity.
Ease.
“Unlike philosophical disputes, political issues just aren’t that difficult.”
This argument defies common sense.
There are plenty of easy questions that we are not polarized over. Is brushing your teeth a good idea? Are Snickers bars healthy? What color is grass? Etc.
Meanwhile, the sorts of issues that people polarize over almost always involve complex, chaotic systems that only large networks of people (with strong norms of trust, honesty, and reliability) can hope to figure out. Hot-button political issues—vaccine safety, election integrity, climate change, police violence, gender dynamics, etc.—are all topics which require amalgamating massive amounts of disparate evidence from innumerable sources.
If you doubt this, switch from qualitative to quantitative questions. Instead of “Is the climate changing?” ask “How many degrees will atmospheric temperatures increase by 2100?” Instead of “Is police violence a problem?” ask “How much does police violence (vs. economic inequality, generational poverty, institutional racism, etc.) harm people of color?” Even with complete institutional trust, we don’t know!
Back to the qualitative questions. They may be “easy” to answer for anyone who shares your life experiences, social circles, and patterns of trust. But most people who disagree with you don’t share them.
Nor is it easy to figure out whom to trust. It may seem obvious to you that the trustworthy sources are X, Y, and Z. But networks of trust are built up over an entire lifetime, amalgamating countless experiences of who-said-what-when—you can’t figure it out from the armchair or a quick google search.
To see this, imagine what would happen if tomorrow all your friends and co-workers started voicing extreme skepticism about your favorite news networks and scientific organizations. I doubt it’d take long before you became unsure whether to trust them.
But that is the position of your political opponents! Most of their friends and co-workers do think that your favored (X, Y, and Z) sources are unreliable. And, more generally, they’ve had a different lifetime of amalgamating different experiences leading to different networks of trust. No wonder they think differently—you would too, in their shoes.
Baselessness.
“I’ve talked to people who believe these things, and they don’t have any good reasons.”
This argument underestimates the communicative divide between people with radically different viewpoints.
I recently had the following experience. One day I gave a lecture to a room full of philosophers who said (I hope, honestly!) that I gave an articulate defense of the rationality of a form of confirmation bias. The next day, I was having coffee with a friend who’s a biologist, and I tried to explain the idea.
It didn’t go well. I gave examples that were confusing. I referred to concepts they didn’t know. I started to explain them, only to realize the concept was unnecessary. I backtracked and vacillated. In short: I was an inarticulate mess.
Most academics have had similar experiences. Why?
Conversations always take place within a common ground of mutual presuppositions. This is extremely useful, because it allows us to move much quicker over familiar territory to get to new ideas—at least when our audience shares our presuppositions.
But it causes a problem when we get used to talking in that way. When we talk to someone who doesn’t share those presuppositions, we find ourselves having to unlearn a bunch of conversational habits on the fly. This is hard, so often the conversation runs aground. (This is why academics are so often so bad at explaining their research to non-academics.)
Talking across the political aisle raises exactly the same problem: it pulls the conversational rug of shared presuppositions out from under us.
To see why, consider the following thought experiment. Suppose you think that climate change is human-caused. Now you find yourself in an Uber, talking to someone who—though open and curious—is unsure about that. It’s clear that he watches completely different media than you, has no direct experience with scientists or the institution of science, has a completely different social network, and is generically skeptical about powerful institutions. You have 5 minutes to explain why you believe in climate change. How well do you think you’ll do?
Not well! (If you doubt this, try it—you’re likely suffering from an illusion of explanatory depth.)
The result? Your interlocutor would probably come away thinking that you don’t have any good reasons for your views. But the fact that you seemed this way to him in a brief conversation obviously doesn’t show that you don’t have good reasons for your beliefs—rather, it shows how hard it is to convey those good reasons across such a large communicative divide.
Conformity.
“People are too conformist, and to the wrong sources.”
This argument uses a double-standard.
Either (i) people should listen to their social circles and trusted authorities, or (ii) they shouldn’t.
If (i) they should, the result will be that those with very different patterns of social trust than you should believe very different things. This is what happens to people like Elizabeth. When their friends, co-workers, and regular media outlets spend months talking about suspicious facts about the 2020 election, of course it’s reasonable for them to become skeptical. (Wouldn’t you do the same, if your friends and regular media outlets started saying such things about—say—the 2016 election?)
On the other hand, if (ii) people shouldn’t listen to their social circles and trusted authorities, then a knock-on effect will be excess skepticism. This is what happens to those who are skeptical of vaccines, climate change, or science generally. In fact, people who believe in conspiracy theories have usually “done their own research”— people who are skeptical of the covid-19 vaccines know much more about the fine details of their development than I do.
Psychology.
“Psychology and behavioral economics have shown that people are systematically irrational.”
They really haven’t.
That’s what this blog is about. It’s my attempt to step outside my presuppositional bubble and explain why there’s reason to be skeptical of narratives of irrationality.
In short: empirical work on irrationality always presupposes normative claims about what rational people would think or do in the situation. Often those claims are oversimplified—or just wrong.
I’ve argued that this is true for research on overconfidence, belief persistence, confirmation bias, the conjunction fallacy, the gambler’s fallacy, and polarization. Stay tuned for more.
That’s my attempt to say why I’m unconvinced by the common reasons for suspending epistemic empathy. Maybe we really should think that those who disagree with us have good reasons to do so.
But I recognize the meta-point: lots of people disagree with me, about this! They’re convinced that irrationality is what drives many societal disagreements. And I’d be a hypocrite if I didn’t think those beliefs were reasonable, too. I do. (Which is not to say I think those beliefs are true—thinking a belief is reasonable is compatible with thinking it’s wrong.)
So please: share your reasons! I want to know why you think your political opponents are irrational, so I can better work out why (and to what degree) I disagree.
Wonderful post!
While I think I largely agree, I want to draw attention to something you mention only briefly. I don't think skepticism should only be used as a reductio. I think taking seriously the dynamics you're identifying *ought* to create a lot of pressure towards skepticism.
I was just reading about an exchange between economists Piketty, Saez, and Zucman (PSZ) on the one hand and Auten and Splinter (AS) on the other. It concerned the question of how income inequality has changed over time, with PSZ arguing that it's increased a lot, and AS denying this. The dispute turned on how one should extrapolate from reported income to unreported income, which pretty much by definition we can't measure directly. My guess is that a lot of people I know would be strongly disposed to side with one group or the other upon hearing about the dispute. Progressives concerned with inequality will side with PSZ, libertarians with more faith in markets with AS. But for me, while I guess I'm not *completely* neutral, it's so clear that I can't understand the guts of the debate (I have no idea how to estimate shares of unreported income!) and only have weakly reliable, superficial cues to go on, that if better data turned out to vindicate one side completely, I would *not* bet at steep odds on which side it would be.
I think there's a tension between accepting all the dynamics you're pointing out and complacently holding on to your political opinions, where the only cognitive upshot of that acceptance is conceding that the other side is also rational. Conceding they're also rational should bear some relation to thinking they might be right, which should put some pressure on your own views.
Excellent essay, I don't have much to quibble with. I'm left of center politically and I see most Republicans acting very rationally. It's rational for them to be selfish and want to pay less in taxes. It's rational of them to lie about Democrats, or at least easily accept such lies from others, because they want to win elections. It's rational to support Republican candidates if you're against abortion even if you would benefit economically from Democratic policies.
In the Prisoner's Dilemma the selfish strategy strictly dominates the altruistic one, so it's rational to play it. But what if the other prisoner is your spouse? I propose an experiment that adjusts the payoffs until you decide to change your strategy. The ratio of increased incarceration you're willing to endure so that the other prisoner's incarceration is reduced would be the altruism factor (for instance, accepting one extra year to spare your spouse two would give a factor of 0.5). It's rational to have zero altruism when you're completely selfish.
In short, I don't think the people I disagree with politically are any of the 5 things you described; I just think the richest are selfish and the poorest are altruistic but have very different culture/values. What you overlook, in my opinion, is the active disinformation campaign by the rich Republicans to get the poor Republicans to vote. E.g. the active propaganda from oil companies against climate change, the Club for Growth against taxes, Wall Street against regulations, etc. There is no doubt that regular people exist who believe some well-crafted lies. How would you characterize them?
I am a professional logician and I'm sure I believe some (but probably fewer) such lies from the other side. Let's take, for example, Hunter Biden's laptop. What does it actually show? I still have no idea.