TLDR: We’ve lost our epistemic empathy. Irrationalist narratives from psychology are partly to blame. Recent work from psychology and philosophy changes the narrative. Sign up to see why.
68% of Republicans think Trump’s recent indictment is unfair. 89% of Democrats describe Republicans as ‘racist’. Half of AI researchers think there’s at least a 10% chance that AI will overthrow humanity.
You probably think that some (or all) of these views are—to put it politely—mistaken. But you probably aren’t inclined to put it politely.
This is familiar in politics. In the last few years, people have become sharply more inclined to label the other side as close-minded, dishonest, immoral, and unintelligent:
More generally, over the last couple decades partisan antipathy has tripled:
Our negativity extends outside politics. Trends in music lyrics show a sharp decrease in occurrences of ‘love’ and increase in ‘hate’ over the same period:
Likewise, we’re increasingly happy to describe others with irrationalist terms of abuse like “crazy”, “stupid”, and “fool”:
Or “ridiculous”, “idiot”, “dumb”, and “insane”:
We seem to be losing our epistemic empathy: our ability to both be convinced that someone is wrong, and yet acknowledge that there are sensible reasons that led them to their opinions.
I think this is a disaster.
Put me in the camp of those who think the greatest threat to democratic institutions comes from the breakdown of toleration between parties, an increasing willingness to demonize the other side, and the tit-for-tat electoral hardball that ensues.
That’s controversial. But even if you disagree, surely you’ll agree that our loss of epistemic empathy is a problem.
It’s also misguided.
It’s a difficult question—one I’ll explore in future posts—why exactly we’ve become so quick to attribute irrationality and bias to those who disagree with us. But one clear factor was the rise of irrationalist narratives from academic psychology and behavioral economics.
Starting in the 70s, the “heuristics and biases” research program began to paint a picture of human thinking as riddled with simple errors and systematic biases. It was the dominant paradigm for several decades, generating a list of over 200 cognitive biases and helping spur the behavioral turn in economics.
Although the seminal work was done in the 70s, it took a while to reach the popular imagination. When do you think “heuristics and biases” started to become common parlance?
You guessed it. The rate has quintupled since the mid-90s:
This was helped by the fact that Daniel Kahneman—who, with Amos Tversky, coined the term—won the Nobel prize in economics in 2002. Here are his mentions and citation-rates before and after:
Kahneman, Tversky, and the heuristics-and-biases program have now become obligatory irrationalist references—waiting in the wings in every op-ed or casual discussion about the crazy things that people believe.
Of course, determining causation here is nearly impossible. Obviously the heuristics-and-biases research program didn’t precipitate our loss of epistemic empathy—broader cultural and political trends did. But equally obviously: having authoritative, ready-to-hand documentation of pervasive irrationality must have given momentum to this loss.
In the meantime, psychology has had a change of heart.
For decades now, a growing number of researchers have complained that the heuristics and biases program raised more questions than it answered. “Why these heuristics?”; “How do we learn heuristics?”; “Can heuristics truly explain human flexibility and performance?”
The program’s failure to make substantial theoretical or predictive progress has led to the rise of a new program—“resource-rational analysis”—that sees human cognition as an approximation to optimal performance, given the mind’s resource constraints. Buoyed by its success in explaining vision, motor control, and neural coding, this program has claimed—with some plausibility—to provide better explanations of many of the impressive successes and surprising failures of human cognition.
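To give a flavor of the style of explanation (this is my own toy sketch, not a model from the resource-rational literature), imagine an agent that must guess which of two hypotheses is true, but can only afford to draw a handful of samples from its posterior before answering. A few samples already buy most of the accuracy that unlimited computation would:

```python
import random

# Toy sketch of a resource-rational tradeoff (illustrative assumptions only):
# the agent approximates the Bayes-optimal choice by drawing a few samples
# from its posterior and going with the majority, instead of computing the
# full posterior comparison.

def sample_based_guess(posterior_prob_A, n_samples, rng):
    """Guess hypothesis A iff most of n posterior samples favor A."""
    votes_for_A = sum(rng.random() < posterior_prob_A for _ in range(n_samples))
    return votes_for_A > n_samples / 2

def accuracy(n_samples, trials=100_000, seed=0):
    """How often the sample-based guess matches the true hypothesis."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        p = rng.random()              # the agent's posterior P(A) on this trial
        truth = rng.random() < p      # the world, drawn from that posterior
        correct += (sample_based_guess(p, n_samples, rng) == truth)
    return correct / trials

for k in (1, 3, 101):
    print(f"{k:>3} posterior samples -> accuracy {accuracy(k):.3f}")

# Roughly: 1 sample ~0.67, 3 samples ~0.70, 101 samples ~0.75.
# The unlimited-computation ceiling in this setup is 0.75, so one or two
# samples already capture most of the achievable accuracy at a tiny
# fraction of the cost.
```

The resource-rational program scales this logic up: when computation is costly, behavior that looks sloppy in isolation can be close to the best achievable on the budget.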
‘Optimality’ is the new buzzword in cognitive science.
Why? The motivating question for these researchers is why it’s taken so long—decades of research and billions of dollars—to make machines that can even begin to see and learn and think like people.
Their answer, in a nutshell: seeing, learning, and thinking are hard problems—computationally intractable, in the technical sense. This is easy to overlook when we focus on humans, but impossible to ignore when we try to make machines that can replicate them.
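A back-of-the-envelope illustration of what ‘intractable’ means here (my numbers, purely for scale): representing a full joint probability distribution over n yes/no features takes 2^n entries, and answering a query exactly means summing over them. Exact inference in general probabilistic models is NP-hard, and even the brute-force bookkeeping is hopeless past a few dozen variables:

```python
# Toy illustration of why exact probabilistic inference blows up
# (brute-force joint-table sizes; illustrative only):
for n in (10, 20, 50, 100, 300):
    print(f"{n:>3} binary features -> {2 ** n:.3e} joint-table entries")

# 300 binary features already needs ~2e90 entries, more than the number of
# atoms in the observable universe (~1e80). A scene or a sentence involves
# far more than 300 features, so any mind or machine that handles them
# must approximate rather than compute exactly.
```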
Although there are reasons for caution about both the details and the bolder claims of this “Bayesian turn” in cognitive science, I think there’s a lot that’s right about it.
Consider this. The heuristics-and-biases argument for human irrationality was based on the fact that we regularly violate the basic norms of reasoning under uncertainty. Perhaps the most widely-maligned instance is the conjunction fallacy—the fact that, when forced to guess based on little information, people will often rate a conjunction A&B as more probable than one of its conjuncts, B. For example, given a description of Linda as a bright and socially active individual, they’ll say (2) is more likely than (1):
(1) Linda is a bank teller.
(2) Linda is a bank teller and a feminist.
This violates one of the simplest laws of probability: every (2)-possibility in which Linda is both a bank teller and a feminist is a (1)-possibility in which she’s a bank teller—but not vice versa! So (2) can’t be more likely than (1).
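In symbols, the same point is just the chain rule together with the fact that no probability exceeds 1:

```latex
P(A \wedge B) \;=\; P(B)\,P(A \mid B) \;\le\; P(B)
```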
For decades there’s been a spirited debate about what this shows. The irrationalists argue that people can’t handle reasoning under uncertainty, so they get by with simple heuristics that break down in cases like this. The rationalists point out that it’s trivial to define a computer program that will never commit the conjunction fallacy, and that our brains solve harder inferential tasks every waking minute. So something else must be going on—presumably something involving the flexibility and domain-generality of human cognition.
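Here is a minimal sketch of what the rationalists mean (my own illustration, with made-up numbers for the Linda case, not a model anyone has proposed): a system that stores its beliefs as a single probability distribution over possible worlds computes a conjunction’s probability by summing over a subset of the worlds counted for each conjunct, so it can never rate the conjunction higher.

```python
# Minimal sketch: a believer that cannot commit the conjunction fallacy.
# Beliefs are one probability distribution over possible worlds; the numbers
# below are arbitrary, illustrative degrees of belief for the Linda case.

belief = {
    # (bank teller?, feminist?): probability of that world
    (True,  True):  0.05,
    (True,  False): 0.05,
    (False, True):  0.60,
    (False, False): 0.30,
}

def prob(event):
    """Probability of any event, given as a predicate over worlds."""
    return sum(p for world, p in belief.items() if event(world))

teller   = lambda w: w[0]
feminist = lambda w: w[1]
both     = lambda w: teller(w) and feminist(w)

print(prob(teller))   # 0.10
print(prob(both))     # 0.05 -- a subset of the teller-worlds, so never higher
```

Nothing here is clever; the point is only that respecting the conjunction rule is computationally trivial, which is why the rationalists think the fallacy must reflect something other than an inability to handle uncertainty.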
With the arrival of GPT-4—a trillion-parameter model that cost $100 million and perhaps a year to train—we for the first time have an artificial system that comes close to being a domain-general reasoner. If it were your child, you’d be bragging about its test scores: 90th percentile on the Bar Exam, 88th on the LSAT, 80th on the quantitative GRE and 99th on the verbal GRE. Oh, and it passes quantum-computing exams without even taking the course.
It also readily commits the conjunction fallacy:
I’ve also gotten it to replicate the overconfidence effect, the gambler’s fallacy, the base rate fallacy, and the inertia effect.
An irrationalist interpretation is that GPT-4 is learning our flaws as well as our successes. But that would predict that it should score around the 50th percentile on all these exams, and that its writing should be about as good as the average writing on the internet. That’s wrong. GPT-4 seems to be learning our successes better than our flaws.
The natural conclusion: it’s learning to be smart, and these “fallacies” are side effects of adaptive general-reasoning capacities—as the rationalist camp has been saying all along.
There’s obviously much more to be said. But this gives a flavor of the sorts of questions I’ll be asking on this blog:
What, exactly, are the reasons for attributing widespread irrationality? Do they stand up to scrutiny, once informed by psychological and philosophical work on rationality? If not, what does that mean about how we think about our ideological opponents—and ourselves?
For decades, philosophers and psychologists have been converging around a set of methods for thinking about these questions. I think it’s time for some new answers.
If you’re curious, sign up for updates. Some of the posts on the way:
Epistemic empathy: Why the arguments for attributing irrationality to our political opponents are weak.
Bayesian injustice: Why unbiased, rational processing of evidence about groups of people that are known to be equally qualified will often systematically disfavor the disadvantaged group.
Bayesian non-convergence: Why rational people often shouldn’t be expected to converge to the truth.
Etc.: Why the backfire effect is sometimes rational, the representativeness heuristic might be adaptive, and why you should think twice before making small talk about how bad the world is.
Isn't thinking that Trump supporters are dumb and have been duped into supporting him *more charitable* than thinking they are intelligent and knowingly support his behavior and policies?
'...surely you’ll agree that our loss of epistemic empathy is a problem'
Nope.
Only within the confines of a specific, and quite narrow, frame can this contention be formulated.
According to this frame, there was once an idealized (mythic, truth be told) arena of, dare I say, gentlemanly debate, where the concerns of the day were subjected to the crucible of informed discourse between interlocutors accorded mutual respect, in the spirit of open inquiry and shared purpose to improve the state of the polity. In the most important instances, individuals (those of merit, at least) could be relied upon to set aside their biases and petty interests, and engage in sound reasoning, all conscious and logical-like. Rational, even.
There's a catch, unfortunately. That proverbial fly in the ointment.
This frame, and with it the claim of status quo rationalism, have no basis in reality.
Never once in this society (nor any I'm acquainted with) has such an Elysian realm existed, although this mythos of an intellectual paradise lost to the corrupt and base rantings of the hoi polloi (perhaps of the highly suspect postmodernist cohort) is popular among an oddly aggrieved band of otherwise highly privileged speakers.
Hidden motives abound in the efforts to pronounce rationalism victorious, hidden (ironically) even to many who most loudly proclaim its triumph. No ground is more fertile for motivated reasoning and self-deception than the mouldering soil of the rationalist project (except perhaps in the sandbox of Large Language Models):
"Self-serving beliefs can also be generated ad hoc through contrived cover stories, as shown by Kunda in a series of elegant demonstrations (Kunda 1990). In one case, subjects were asked to evaluate the credibility of a (fake) scientific study linking coffee consumption and breast cancer. Female subjects who also happened to be heavy coffee drinkers were especially critical of the study, and the least persuaded by the presented evidence. This is only a sample of the literature documenting how evidence consistent with the favoured hypothesis receives preferential treatment (Ditto & Lopez 1992; Dawson et al. 2002; Norton et al. 2004; Balcetis & Dunning 2006). Moreover, this phenomenon occurs largely outside of awareness (Kunda 1987; Pyszczynski & Greenberg 1987; Pronin et al. 2004). No one questions the reality of motivated reasoning or perception. The critical issue is whether motivational biases are sufficient to explain self-deception." (Mijovic-Prelec and Prelec, 2010) (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2827460/)
For a comprehensive dismantling of the rationalist paradigm, and a scrupulous history of the horrifying sociopolitical products of the rationalist project, see Peter Sloterdijk's 'Critique of Cynical Reason'.