
1. Fundamentally, this article, in making its theoretical argument that we should expect rigidity at the extremes, implicitly assumes a framing narrative about how politics works that, while it can be found on both the left and right, is most common in the center. That framing theory is that political differences are primarily driven by different assessments of evidence. There is another possibility: that differences in political views are driven, at root, by differences in fundamental values. The truth is doubtless a blend of these, but if the normative roots of political disagreement are more important than the positive roots, it's unclear whether this argument will stand up.

2. We need to think about sociological causes and correlates of cognitive flexibility. The cognitive-flexibility metrics mentioned in the article look to me suspiciously like measures of intelligence, of things closely related to intelligence[1], and of other cognitive capabilities associated with socio-economic success. Regardless, I suspect, as someone on the left myself, that it is very possible centrists are on average smarter than leftists and rightists. Why? Because intelligence is associated with success, and being successful is associated with not wanting to rock the boat and with thinking the system is fundamentally sound: in effect, the centrist position.

[1] Googling suggests correlations of around r = 0.4, which would doubtless be higher after controlling for measurement imperfections.

Author's reply:

These are fantastic points, thanks! A couple thoughts:

1) I agree that the model most naturally treats factual disagreements as giving rise to political ones. But two thoughts:

(i) First, I think there's pretty good evidence that values tend to be at least highly correlated with factual beliefs. This is pretty obvious in some cases, eg how much value people assign to environmental policies is going to correlate with how dire they think the effects of (lack of) environmental policies are—just because most people have some regard for consequences in their values. I think studies support this pretty strongly for climate change in particular (eg this one: https://link.springer.com/article/10.1057/s41269-022-00265-4), and in general there's a correlation between how "overprecise" people are in their estimates of quantities and their degree of affective polarization (https://www.sciencedirect.com/science/article/abs/pii/S221480431830418X).

(ii) Although I gave an example of an empirical quantity, there's nothing in the model that prevents us from plugging in a normative quantity. Eg let µ be the degree to which conservative values are correct, or whatnot. Of course, hard-nosed empiricists might not like the idea of modeling that as a quantity we could get evidence about—but that'll depend on debates about the metaphysics and epistemology of value facts and how we could get evidence about them. If there's such a thing as a reasonable vs unreasonable position on normative question X, and arguments for each side can provide evidence about that, then I'd think we should be able to apply the model to normative question X too.
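For concreteness, here's a minimal sketch (all names and numbers are hypothetical, not from the article) of what "getting evidence about µ" could look like, whether µ is an empirical or a normative quantity: two agents with opposed priors over µ both condition on the same noisy evidence stream, and their posteriors converge.

```python
# Minimal hypothetical sketch: treat the disputed quantity mu (empirical or
# normative) as something each agent holds a normal prior over, updated on
# noisy observations via the standard conjugate normal-normal rule.
import random

def update_normal(prior_mean, prior_var, obs, obs_var):
    """Posterior (mean, variance) over mu after one noisy observation."""
    precision = 1 / prior_var + 1 / obs_var
    post_var = 1 / precision
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

random.seed(0)
true_mu = 0.0
left = (-2.0, 1.0)   # (prior mean, prior variance): leans one way on mu
right = (2.0, 1.0)   # opposed prior, same confidence

for _ in range(50):
    obs = random.gauss(true_mu, 1.0)  # both agents see the same evidence
    left = update_normal(*left, obs, 1.0)
    right = update_normal(*right, obs, 1.0)

# After conditioning on shared evidence, the two posteriors sit close together.
print(left, right)
```

Nothing in the arithmetic cares whether the observations are measurements or arguments; that's the sense in which the model extends to normative question X, given the metaphysical caveats above.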

2) Very good point! I totally agree that that's a confounder in the empirical studies. (It's been a while since I read the details—I can't remember whether they controlled for this.) Interestingly, it's not a factor in the simulations (unless it's showing up indirectly—maybe those who condition more regularly would score better on IQ tests?), so it isn't essential to get the result. But I'd need to think more about this one...
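To make that parenthetical concrete, here's a hypothetical sketch (not the actual simulation code from the article) where the only difference between two agents is how often they condition, with no IQ-like variable anywhere:

```python
# Hypothetical sketch: two agents share the same prior over mu and see the
# same evidence stream, but the "flexible" agent conditions on every
# observation while the "rigid" one conditions on only one in five.
import random

def posterior(mean, var, observations, obs_var=1.0):
    """Sequential conjugate normal-normal updates; returns (mean, variance)."""
    for obs in observations:
        new_var = 1 / (1 / var + 1 / obs_var)
        mean = new_var * (mean / var + obs / obs_var)
        var = new_var
    return mean, var

random.seed(1)
true_mu = 0.0
stream = [random.gauss(true_mu, 1.0) for _ in range(100)]

flexible = posterior(3.0, 1.0, stream)        # conditions on all 100 observations
rigid = posterior(3.0, 1.0, stream[::5])      # conditions on only 20 of them

# The flexible agent ends up more confident (lower posterior variance) and
# gives the shared prior less residual weight -- purely from updating more often.
print(flexible, rigid)
```

The point of the sketch is just that updating frequency alone drives the difference here, so the empirical IQ confounder needn't be doing the work in a simulation.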

Thanks again!
