16 Comments
Mar 9 · Liked by Kevin Dorst

That's very cool. Is the idea of agents with limited resources updating on questions that are less fine-grained than the ones that their incoming evidence would allow them to answer already out there in the literature other places? Because even independent of the application to polarization, that's super interesting.
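
(To make the coarse-grained-updating idea concrete, here is a minimal toy sketch, with entirely made-up numbers and partitions rather than anything from the post's actual model: an agent who conditions only on which cell of a coarser question the evidence answers can match the full Bayesian update when the question is well chosen, and learn nothing when it isn't.)

```python
import numpy as np

# Toy example (all numbers made up): two hypotheses, four fine-grained
# pieces of evidence the world might hand the agent.
prior = np.array([0.5, 0.5])              # P(h0), P(h1)
lik = np.array([[0.4, 0.1, 0.4, 0.1],     # P(e | h0) for e = 0..3
                [0.1, 0.4, 0.1, 0.4]])    # P(e | h1)

def fine_update(e):
    """Full Bayesian update: condition on the exact evidence received."""
    post = prior * lik[:, e]
    return post / post.sum()

def coarse_update(e, question):
    """Resource-limited update: condition only on which cell of the
    coarser 'question' (a partition of the evidence) e falls into."""
    cell = next(c for c in question if e in c)
    cell_lik = lik[:, cell].sum(axis=1)   # P(evidence lands in this cell | h)
    post = prior * cell_lik
    return post / post.sum()

e = 0  # the evidence actually received
print(fine_update(e))                          # [0.8, 0.2]
print(coarse_update(e, [[0, 2], [1, 3]]))      # good question: also [0.8, 0.2]
print(coarse_update(e, [[0, 1], [2, 3]]))      # poor question: stuck at [0.5, 0.5]
```

The toy case just illustrates that which question the agent updates on, and not only the evidence itself, fixes what gets learned.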

author

Thanks Andy! Agreed—I've started playing with the models a bit, and it seems like there might be other interesting stuff there.

I'm not sure! I'd been hearing people give informal epistemology talks on this sort of thing a lot recently, but hadn't seen a formal model of it that looked like this yet. I figured by writing the post, someone would point me to it if it were out there. (Not yet.)

Definitely let me know if you come across anything!

Feb 27 · Liked by Kevin Dorst

Thanks for writing this, very cool model! I'd be interested in seeing how much this changes if you extend the model to include others' beliefs as evidence (as you mention when talking about whether disagreements persist). In particular, it seems natural to think that Polder's fans and critics would find it at least a little useful to talk to each other -- since they're tracking different things -- and so even if they think the other is biased, presumably this might slow polarisation?

Excited to see your future work on this!

author

Definitely! I think you're exactly right here. It's a bit subtle to know exactly how to set it up, but if anything I'm worried that if we make them TOO good at understanding how the other updates, having them share their opinions will actually erase the disagreement entirely. I have a few thoughts about why that might not happen, especially if they can't incorporate all the evidence that comes from talking. But will definitely be thinking more about this, and maybe doing a later post on it.

Thanks!


Thanks for writing this. As a layman, I think of Bayesian epistemology as an idealized, rational model of updating beliefs in the face of new evidence. An ideal that no one can live up to, but there's some value in trying. So it's not that surprising to see that we run into problems when we relax that "idealized, rational" bit by making our questions conditional on the evidence. I'd also expect to see empirical evidence for this in the behavioral science literature (humans are gonna bias). But not sure how that would affect the usefulness of Bayesian epistemology as an idealized model of human behavior (a claim on how we ought to update). It was never intended to be a good predictor of human action anyways (I think? Layman disclaimer.)

author

Thanks!

It definitely has a long history as a normatively ideal model, but a fair bit of interest in it is actually driven by seeing lots of ways in which real people's updating is often a good approximation of Bayesian updating. That was initially part of the reason the heuristics-and-biases program got so popular (it seemed to show that Bayesianism was barking up the wrong tree when it came to the general features of human judgment). But nowadays lots of cognitive scientists DO think that approximate Bayesian inference is, at some level of explanation, what the brain is doing (eg https://press.princeton.edu/books/paperback/9780691205717/what-makes-us-smart).

So the way I'm seeing this is (1) yes, definitely, not a big surprise that de-idealizing will lead to bad consequences, but (2) maybe this sort of model can explain both why humans do so WELL at inference sometimes (when they're asking good questions) and why they do so POORLY at other times, all the while seeing what we're doing as in some sense "approximating" Bayesian inference.


My understanding from reading popular pieces by the "predictive processing" crowd (Anil Seth, Andy Clark, Karl Friston, etc.), who place Bayesian inference at the center of perception and action, was that the same evidence is often weighted in diverse ways, or we seek different kinds of evidence, since our uncertainties about or predictions of the world can be very different. So that descriptive Bayesian picture of cognition leaves room for (or even attempts to predict and explain) the biases that can be involved in precision weighting and inference processes. And I thought those kinds of biases fit really well within that "Bayesian brain" framework.
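
(A rough way to picture the precision-weighting point, using a generic Gaussian textbook update rather than anything specific to Seth, Clark, or Friston: the same observation moves two agents by different amounts when they assign it different precisions.)

```python
def precision_weighted_update(mu_prior, tau_prior, x_obs, tau_obs):
    """Gaussian belief update with known noise: the posterior mean is a
    precision-weighted average of the prior mean and the observation."""
    tau_post = tau_prior + tau_obs
    mu_post = (tau_prior * mu_prior + tau_obs * x_obs) / tau_post
    return mu_post, tau_post

# Same prior, same observation (x_obs = 2.0), but different precision
# assigned to the evidence -> different posterior beliefs.
print(precision_weighted_update(0.0, 1.0, 2.0, tau_obs=4.0))   # trusts it: mean 1.6
print(precision_weighted_update(0.0, 1.0, 2.0, tau_obs=0.25))  # discounts it: mean 0.4
```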

I guess my confusion is due to me thinking of this Bayesian brain hypothesis (an attempt at describing cognition, very low level) as quite distinct from the normative and idealized model of rational Bayesian updating (a prescription on how to engage in truth-seeking, higher level), both in substance and goals. My impression was that you were responding to the latter, rather than the former.

Appreciate you for taking the time to write a detailed response btw!

author

Got it. Yes, there very much is the low-level, predictive processing picture of the mind which gives pride of place to (approximations to) Bayesian inference.

I actually had in mind different folks in cognitive science—the "rational analysis" or "resource-rational analysis" folks (eg https://cocosci.princeton.edu/papers/lieder_resource.pdf) who understand (approximations to) Bayesian models to be a good high-level ("computational") description of what problems the mind is solving, as well as (often) an approximate middle-level ("algorithmic") description of how it does so.

But you're definitely right that Bayesian models are used in many different ways, and in LOTS of contexts—especially contexts in which they are said to be free from various biases—they are seen as an idealized model of how an ideally rational agent would reason. In that sense, the sort of model I'm sketching here is definitely non-ideal, so you're right about that!


I see. Haven't heard of those approaches before, will definitely check them out! (Really liked Thomas Griffiths' popular book on algorithms, though I've never read his academic work.)

Feb 24 · Liked by Kevin Dorst

Great piece!

author

Thank you!


re: If you know of related work on Bayesian models of limited attention, please let me know!

People in the computer vision field, as well as the neuroscience of vision field, have been working on Bayesian models of limited attention for quite some time now. See, for instance, this old paper (from 2009): _A Bayesian inference theory of attention: neuroscience and algorithms_ (Sharat Chikkerur, Thomas Serre and Tomaso Poggio, Center for Biological and Computational Learning, MIT), https://dspace.mit.edu/bitstream/handle/1721.1/49416/MIT-CSAIL-TR-2009-047.pdf

I don't know whether the insights from these fields generalise to your field -- figuring out whether they do seems to be the meat of the problem. And I stopped paying a lot of attention to computer vision in the 2010s, so I am in no way up to date on what current thought is. But popping over to Tomaso Poggio's lab at MIT, since you are already on campus, might be a quick way to find out things you will find interesting...

author

This is a great lead, thanks! I know of some of the stuff on Bayesian vision models, but not much about how they explicitly model attention. Will take a closer look!


The magic phrase is 'locus of attention'.

author

Fantastic, thank you!

Apr 16 · Liked by Kevin Dorst

Excellent post. Would like to see it applied to law, e.g. "holdout" jurors. For my part, I tried to create a "simple" Bayesian model of law cases here: https://arxiv.org/abs/1506.07854
