TLDR: We’ve lost our epistemic empathy. Irrationalist narratives from psychology are partly to blame. Recent work in psychology and philosophy is changing the narrative. Sign up to see why.
68% of Republicans think Trump’s recent indictment is unfair. 89% of Democrats describe Republicans as ‘racist’. Half of AI researchers think there’s at least a 10% chance that AI will overthrow humanity.
You probably think that some (or all) of these views are—to put it politely—mistaken. But you probably aren’t inclined to put it politely.
This is familiar in politics. In the last few years, people have become sharply more inclined to label the other side as closed-minded, dishonest, immoral, and unintelligent:
More generally, over the last couple decades partisan antipathy has tripled:
Our negativity extends outside politics. Trends in music lyrics show a sharp decrease in occurrences of ‘love’ and an increase in ‘hate’ over the same period:
Likewise, we’re increasingly happy to describe others with irrationalist terms of abuse like “crazy”, “stupid”, and “fool”:
We seem to be losing our epistemic empathy: our ability to be convinced that someone is wrong while still acknowledging that there are sensible reasons that led them to their opinions.
I think this is a disaster.
Put me in the camp of those who think the greatest threat to democratic institutions comes from the breakdown of toleration between parties, an increasing willingness to demonize the other side, and the tit-for-tat electoral hardball that ensues.
That’s controversial. But even if you disagree, surely you’ll agree that our loss of epistemic empathy is a problem.
It’s also misguided.
It’s a difficult question—one I’ll explore in future posts—why exactly we’ve become so quick to attribute irrationality and bias to those who disagree with us. But one clear factor was the rise of irrationalist narratives from academic psychology and behavioral economics.
Starting in the 70s, the “heuristics and biases” research program began to paint a picture of human thinking as riddled with simple errors and systematic biases. It was the dominant paradigm for several decades, generating a list of over 200 cognitive biases and helping spur the behavioral turn in economics.
Although the seminal work was done in the 70s, it took a while to reach the popular imagination. When do you think "heuristics and biases" started to become common parlance?
You guessed it. The rate has quintupled since the mid-90s:
Kahneman, Tversky, and the heuristics-and-biases program have now become obligatory irrationalist references—waiting in the wings of every op-ed or casual discussion about the crazy things that people believe.
Of course, determining causation here is nearly impossible. Obviously the heuristics-and-biases research program didn’t single-handedly precipitate our loss of epistemic empathy—broader cultural and political trends did most of the work. But just as obviously, having authoritative, ready-to-hand documentation of pervasive irrationality must have given momentum to that loss.
In the meantime, psychology has had a change of heart.
For decades now, a growing number of researchers have complained that the heuristics-and-biases program raised more questions than it answered. "Why these heuristics?"; "How do we learn heuristics?"; "Can heuristics truly explain human flexibility and performance?"
The program’s failure to make substantial theoretical or predictive progress has led to the rise of a new program—“resource-rational analysis”—that sees human cognition as approximating optimal performance, given the mind’s resource constraints. Buoyed by its success in explaining vision, motor control, and neural coding, this program has claimed—with some plausibility—to provide better explanations of many of the impressive successes and surprising failures of human cognition.
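Schematically (my gloss, not the researchers’ own formalism; the symbols are illustrative), resource-rational analysis models the mind as picking, from the limited set of strategies it can actually implement, the one with the best expected payoff net of computational cost:

```latex
h^{*} \;=\; \arg\max_{h \in H} \; \Big( \mathbb{E}\big[U(h)\big] \;-\; \mathrm{cost}(h) \Big)
```

Here H is the set of cognitively feasible strategies, U the utility of a strategy’s outcomes, and cost the time, memory, or energy it consumes. On this picture, apparent "biases" show up as the price of the cost term, not as malfunctions.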
‘Optimality’ is the new buzzword in cognitive science.
Why? The motivating question for these researchers is why it’s taken so long—decades of research and billions of dollars—to make machines that can even begin to see and learn and think like people.
Their answer, in a nutshell: seeing, learning, and thinking are hard problems—computationally intractable, in the technical sense. This is easy to overlook when we focus on humans, but impossible to ignore when we try to make machines that can replicate them.
Although there are reasons for caution about both the details and the bolder claims of this “Bayesian turn” in cognitive science, I think there’s a lot that’s right about it.
Consider this. The heuristics-and-biases argument for human irrationality was based on the fact that we regularly violate the basic norms of reasoning under uncertainty. Perhaps the most widely maligned instance is the conjunction fallacy—the fact that, when forced to guess based on little information, people will often rate a conjunction A&B as more probable than one of its conjuncts, B. For example, given a description of Linda as a bright and socially active individual, they’ll say (2) is more likely than (1):
(1) Linda is a bank teller.
(2) Linda is a bank teller and a feminist.
This violates one of the simplest laws of probability: every (2)-possibility in which Linda is both a bank teller and a feminist is a (1)-possibility in which she’s a bank teller—but not vice versa! So (2) can’t be more likely than (1).
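In symbols (a standard one-line derivation, not from the post itself):

```latex
P(\text{teller} \wedge \text{feminist}) \;=\; P(\text{teller}) \cdot P(\text{feminist} \mid \text{teller}) \;\le\; P(\text{teller}), \quad \text{since } P(\text{feminist} \mid \text{teller}) \le 1.
```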
For decades there’s been a spirited debate about what this shows. The irrationalists argue that people can’t handle reasoning under uncertainty, so they get by with simple heuristics that break down in cases like this. The rationalists point out that it’s trivial to define a computer program that will never commit the conjunction fallacy, and that our brains solve harder inferential tasks every waking minute. So something else must be going on—presumably something involving the flexibility and domain-generality of human cognition.
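To see how trivial, here is a minimal sketch (mine, not the post’s; the toy numbers are made up) of a reasoner that provably cannot commit the conjunction fallacy: it stores a probability distribution over possible worlds and computes every event’s probability by summing over worlds, so a conjunction can never outrank its conjuncts.

```python
# A minimal probabilistic reasoner that cannot commit the conjunction fallacy.
# (Illustrative sketch; the distribution over "worlds" is made up.)

# Each world assigns truth values to the two propositions; weights sum to 1.
worlds = [
    ({"teller": True,  "feminist": True},  0.05),
    ({"teller": True,  "feminist": False}, 0.05),
    ({"teller": False, "feminist": True},  0.60),
    ({"teller": False, "feminist": False}, 0.30),
]

def prob(event):
    """Probability of an event = total weight of the worlds where it holds."""
    return sum(p for world, p in worlds if event(world))

p_teller = prob(lambda w: w["teller"])
p_teller_and_feminist = prob(lambda w: w["teller"] and w["feminist"])

# The conjunction's worlds are a subset of the conjunct's worlds,
# so this inequality holds no matter what numbers we plug in above.
assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)  # 0.1 0.05
```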
With the arrival of GPT-4—a trillion-parameter model that cost $100 million and took perhaps a year to train—we have, for the first time, an artificial system that comes close to being a domain-general reasoner. If it were your child, you’d be bragging about its test scores: 90th percentile on the Bar Exam, 88th on the LSAT, 80th on the quantitative GRE, and 99th on the verbal GRE. Oh, and it passes quantum-computing exams without even taking the course.
It also readily commits the conjunction fallacy:
An irrationalist interpretation is that GPT-4 is learning our flaws along with our successes. But that would predict scores around the 50th percentile on all those exams, and writing about as good as the average writing on the internet. That’s not what we see. GPT-4 seems to be learning our successes better than our flaws.
The natural conclusion: it’s learning to be smart, and these “fallacies” are side effects of adaptive general-reasoning capacities—as the rationalist camp has been saying all along.
There’s obviously much more to be said. But this gives a flavor of the sorts of questions I’ll be asking on this blog:
What, exactly, are the reasons for attributing widespread irrationality? Do they stand up to scrutiny, once informed by psychological and philosophical work on rationality? If not, what does that mean about how we think about our ideological opponents—and ourselves?
For decades, philosophers and psychologists have been converging around a set of methods for thinking about these questions. I think it’s time for some new answers.
If you’re curious, sign up for updates. Some of the posts on the way:
Epistemic empathy: Why the arguments for attributing irrationality to our political opponents are weak.
Bayesian injustice: Why unbiased, rational processing of evidence about groups of people that are known to be equally qualified will often systematically disfavor the disadvantaged group.
Bayesian non-convergence: Why rational people often shouldn’t be expected to converge to the truth.