15 Comments
Jul 23 · Liked by Kevin Dorst

Great post & very clearly written, too.

The mechanism you present seems very clever. However, I wonder if something simpler matters more in practice, namely biased recall. Suppose you ask me to guess whether a coin will come up heads or tails; I make a guess, but you will only flip the coin a year later. A year later, you flip the coin, show me it’s tails, & ask me, “Hey, what did you guess a year ago?” I’m willing to bet many people, even if they were being fully honest, would say “Tails.”

I’m not familiar with the literature, but there must be some way to differentiate between the Bayesian mechanism with ambiguity you describe vs simple biased recall.

author

Thanks!

Oh interesting, I thought you were going to go a different direction with that. So two thoughts.

One is that I definitely think memory failures play a huge role in hindsight bias. If I can't remember exactly what my credences were, then the same mechanism would lead me to shift up my estimate once I learn the answer. The answer I'm considering trying out is that ambiguity in the initial judgment might help explain when and why people have worse memory for what their prior judgment was. But to make this point I was going to roll out the example of a fair coin: I was going to say, "Surely if you ask someone today how likely they thought a fair coin was going to land heads last year, they'll say '50%', since they know in general that's how confident you should be in heads before a coin is flipped." If that was right, then ambiguity might still have a sort of priority in the order of explanation.

But that leads to the second point—your question suggests that you think that's not right? I think I agree that it's not right for all sorts of things (like what you thought about an election 6 months ago, etc.), but that in cases where (1) the prior evidence was perfectly clear, and (2) you remember what your prior evidence was, it's unlikely we'll see much hindsight bias. For example, that seems like it should work first-personally. I'm about to flip a coin at t1. At t2 I see that it lands heads. What, at t2, is my estimate for YOUR credence at t1 that the coin would land heads? Surely 50% (supposing you had an opinion about it—maybe I should've messaged you that I was going to flip it first).

What do you think?

Jul 24 · Liked by Kevin Dorst

I think the "coin-flip one year later" example is nice because there's no ambiguity. So, if people still exhibit hindsight bias, that must be because of simpler memory failures. But what would actually happen in practice? I think we need a smart experimental design to know for sure.

Aug 14 · Liked by Kevin Dorst

Hey Kevin, interesting post, thanks! This argument seems to me to have some weird implications for how subjects should view their current epistemic position. If you're right that rational agents should commit hindsight bias when they trust themselves and when they're uncertain about their present (or past) probability distribution, then it seems that I, who am now aware of this conclusion, should reason as follows when I am asked my current probability (at t1) for the proposition that Biden is too old to campaign effectively:

"Well, I'm not absolutely certain what I think. If I had to put a number on it, I'd bet 60-40 that Biden is too old. However, I also know that at some later time (t2) evidence will come in that will enable everyone to know whether Biden is too old. And I also know that after that evidence comes in, I'll rationally shift my opinions about what my credences were at t1, such that if it turns out that Biden is too old, I'll believe at t2 that I was 70-30 (or something in that ballpark, depending on how great the hindsight bias shift is) at t1, and if it turns out Biden wasn't too old, I'll believe at t2 that I was 50-50 at t1. So, assuming it's rational to defer to my future self (who after all has better evidence than I do) it seems that I should now believe that my current credence that Biden is too old is either 70 or 50, though I'm not sure which."

There seems to be something very odd about a view which implies that rational agents should regard themselves as having a credence in P that is either significantly more or significantly less than their current best estimate of the odds of P. What strikes me as possibly even weirder is that they should think that this is the case because their opinions track (are correlated with?) the truth in a manner that is nevertheless not detectable to them. At best, such an agent strikes me as confused; at worst, incoherent. Thoughts? Am I missing something? Maybe your reaction would just be to reject the rationality of deference?

author

Ha! Very nice. Your reasoning is exactly right that this is an implication—but let me try to convince you that it's not weird after all.

What we can agree would be definitely weird is having (say) a 60% credence in P, and being certain that in the future you'll rationally have an 80% credence in P. In that case, you don't defer to your future self in a very strong sense: you know (for instance) that your future self will take steep bets on P, but now you don't; you know that your future self will be willing to assert P, but now you don't; etc. This is a violation of what's known as the "Value of Evidence/Information" constraint, which says that you should prefer to outsource your decisions to your future (rational, informed) self. (Variants of this are from IJ Good and Blackwell.)

Now, not quite your case, but something similar. Suppose right now I'm 60% in P, but I know my future self will get some evidence. I'm certain that that evidence will either push him to 80% or 40%. I'm exactly 50-50 between those two possibilities. Then we have a case *kinda* like yours, except that what I'm sure of is that I'm either far above or far below what a future, more-informed version of myself would believe. Yet there's nothing odd about this: I still defer to my future self in various ways, including obeying "Reflection" principles: my credence in P is an average of my possible future credences in P.
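Just to make that last arithmetic explicit, here's a minimal check using only the numbers from this example (nothing new is assumed):

```python
# Reflection in the toy case: I'm 50-50 on whether my future,
# better-informed self will be at 80% or at 40% in P.
future_credences = {0.8: 0.5, 0.4: 0.5}   # possible future credence -> my probability of ending up there

# Reflection: my current credence is the expectation of my future credence.
current = sum(c * p for c, p in future_credences.items())
print(current)   # 0.6 -- matches the 60% I started with
```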

In fact, this sort of scenario is the kind we're in *whenever* we're uncertain. If I'm 60% in P, then I know that an omniscient version of myself would be either 100% or 0%; indeed I'm 60%-confident that they'd be 100% and 40% confident that they'd be 0%. The trouble is that I'm not sure which! Again, nothing weird here, I think.

But we haven't quite gotten to your case, where all this happens using my CURRENT evidence. Let's have one more warm-up.

Suppose I'm looking at a complicated logic formula P which I know is either a tautology or a contradiction. Trouble is, I'm really not sure which. Say that I'm 60% confident it's a tautology, as I set out to write a truth-table. At this point, I know that an IDEAL version of myself (who wasn't so logically/computationally limited) would either be 100% or 0% confident of P. Indeed, I'm 60% in the former and 40% in the latter. So here's a case almost like yours, where I know that an ideal version of myself (with my same evidence) would be either much more or much less confident of P; but I don't know which. I still "defer" to this ideal version of myself in all the usual senses—I want to outsource my decisions to them, etc.

Finally, your case. Forget the ideal version of myself; let the opinions I'm unsure about be what *I* really think. In a context where I'm not sure what I really think, I might *guess* that I'm 60%-confident of P, but leave open that maybe I'm 80% or maybe I'm 40%. I might still defer to my actual credences, whatever they are: if I could outsource my decision to my true probabilities, I would. (I think: conditional on me being 80% in P, P's pretty likely so it's worth taking steep bets on; conditional on me being 40% in P, it's pretty unlikely so it's not worth taking bets on; etc.) Indeed, it's precisely because I have this sort of deference (I think my true opinions are correlated with truth) that I exhibit hindsight bias: learning P makes me think I was more likely to be 80% (and less likely to be 40%) than I originally thought!

Note one important difference: we haven't at any point said what my ACTUAL credence in P is. We said what my estimate for it was, i.e. E(Pr(P)), with Pr my probability and E my expectation function. If (unbeknownst to me) in fact Pr(P) = 0.6 = E(Pr(P)), then it is indeed true that my future, hindsight-bias-performing self will have a less accurate estimate of what my credence was. But we haven't stipulated that, and of course I don't know that! By assumption, I'm unsure what my true credence is, so I leave open that Pr(P) = 0.8; in which case doing hindsight bias and shifting E(Pr(P) | P) > 0.6 will make my estimate more accurate. And vice versa in the other direction.
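If it helps, here's a toy numerical sketch of the E(Pr(P)) vs. E(Pr(P) | P) contrast. The three candidate credences and the simple "reweight by c" step (a stand-in for "my true opinions are correlated with truth") are purely illustrative, not the model in the appendix:

```python
# I'm unsure of my own credence Pr(P): I think it's 0.4, 0.6, or 0.8, equally likely.
candidates = {0.4: 1/3, 0.6: 1/3, 0.8: 1/3}

# Before learning anything: my estimate of my own credence, E(Pr(P)).
E_prior = sum(c * w for c, w in candidates.items())          # = 0.6

# Learn that P is true: reweight each candidate c by how likely P was, given c.
posterior = {c: w * c for c, w in candidates.items()}
Z = sum(posterior.values())
E_given_P = sum(c * w / Z for c, w in posterior.items())     # ≈ 0.644

print(round(E_prior, 3), round(E_given_P, 3))   # 0.6 -> 0.644: E(Pr(P) | P) > E(Pr(P))
```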

Is that convincing? I'm still working out how exactly to think through / present this stuff, so let me know what you think! And like I said, fantastic question!


Thanks for taking the time to respond! I'm with you on all the cases up to mine, including the tautology case. The problem case for me is the one where a rational agent works out (using their knowledge of the rationality of hindsight bias) facts about their *current* credence. Let me try to explain what seems problematic about these cases.

First, take the simpler case involving memory loss about what one's credences were at t1. Suppose at t1 I report that my credence in P is .6. (We can stipulate that I have some uncertainty about what my credence actually is, though that doesn't actually matter in this case.) At t2, after learning P, I'm asked what my credence was at t1. Having (faultlessly) forgotten my earlier credence, and rationally believing my credences track the truth, I estimate that my prior credence was .8. No problems here--this seems rational. Call this the memory loss scenario.

Now, the more complicated case in which I remember my earlier credence. Suppose at t1 I report that my credence in P is .6. I have some uncertainty about what my credence actually is -- I think it's possible it's as low as .4, and it's possible it's as high as .8. At t2, after learning P, I'm asked what my credence was at t1, and I'm reminded that I *reported* that it was .6. Relying on my rational belief that my credences track the truth, I estimate that my actual credence at t1 was .8. If I understand your argument correctly, you want to say that my belief at t2 that my credence in P at t1 was .8 is rational--even, that it is rationally required. When confronted with the reminder that I earlier reported a credence of .6, I should remain confident that my actual credence was somewhat higher than that, and that my earlier report was mistaken. Part of what explains the rationality of this behavior is the fact that I was uncertain about my actual credence all along, and the .8 credence that I attribute to my past self at t2 is still within the range of possible credences that I allowed for at t1. Call this the hindsight bias scenario.

Ok, now for the problem. Same set up: I report .6 for P, but think it's possible my actual credence is as low as .4 and as high as .8. I know about hindsight bias, and am convinced by your argument that it is rational. I know that at some future time I will rationally believe that my credence at the present time is either significantly higher or significantly lower than I currently believe it is. I know that this will be the case whether I am in the memory loss scenario or the hindsight bias scenario. So I know that no matter what happens, my future belief about my present credence will be both rational and significantly higher or lower than my current best guess about my actual credence. This knowledge constitutes very good evidence that my present best guess about my actual credence is mistaken. A rational person would take this evidence into account when forming beliefs about their present credences. So, if I am rational, I end up with two seemingly contradictory beliefs about what my credence is: I believe that (a) my current credence in P is (probably) .6, and (b) my current credence in P is (probably not) .6.

This last case is unlike the tautology case you presented, because there, an agent knows that at a later time, their credence in P will be different than it currently is. They don't receive any evidence for thinking that their beliefs about their current credences are mistaken. In this case, the agent receives evidence (in the form of reasoning about what their future rational self would believe) about the accuracy of their beliefs about their current credences. And this evidence seems to require the agent to form a belief about what her current credence is that is incompatible with her present best guess about what her current credence is (which, I take it, is based on a distinct body of evidence).

Apologies for taking a while to respond, I had to take some time to think about your reply. If you've moved on to other things, that's ok, but if you have time to keep thinking about this, I'd be very interested to know what you think. I can readily imagine that the problem I see would be overcome if I had a better understanding of how to model or think about a person's higher-order beliefs about her credences in conditions of ambiguity.

Jul 24 · edited Jul 24 · Liked by Kevin Dorst

That's a great post. It took me a while, but I think I've finally figured out where I disagree. Other commenters also referred to something like that.

(Also, I'm not particularly well-versed in probability theory...).

Say, e is "Kevin likes broccoli". Let's take N possible worlds, which are exactly similar except for the value of e and all implications of the latter. And let's say, in half of these worlds, Kevins like broccoli. The true value of e is revealed on 2024.07.20 in this post.

Then, if we ask my selves from these worlds on 2024.07.19 "Does Kevin like broccoli (yes/no)?", I assume I'd answer "no" more often in worlds where you don't like broccoli, i.e., there'll be that positive covariance you're talking about. Perhaps, in some of these worlds, broccoli is associated with some radical ideology you dislike, a fact I'd pick up on and incorporate into my answer.

In this case, indeed, "hindsight bias" is not a bias. If I'm asked today, "On 2024.07.19, would you say that Kevin likes broccoli?", and I'm not quite sure, it makes sense to use e in that judgment, because I'm in a world where you like broccoli, and I'd, on average, answer "yes" more often compared to the worlds where you don't like broccoli.

But there are other strands of "hindsight bias", which I think are still an issue. We can take 100 pundits in this specific world and ask them the same questions:

- On 2024.07.19: "Does Kevin like broccoli?"

- On 2024.07.29: "On 2024.07.19, would you say that Kevin likes broccoli?"

Let's categorize pundits into 4 groups:

1. Both "no"

2. First "yes", second "no"

3. First "no", second "yes"

4. Both "yes"

Pundits in groups 1 and 2 are obviously delusional. But it's also obvious that group 4 is preferable to group 3! And we might as well blame group 3 for "hindsight bias", because they were wrong, but in hindsight, thought they were right. Group 4 would make money on a prediction market; group 3 wouldn't and wouldn't even know why.

And, well, there are usually many more "group 3 pundits" than "group 4 pundits".

I'll try to put it more abstractly. I can be uncertain of something because I don't know what I actually think (or thought), or because there isn't enough knowledge.

In the latter case, I've done what I could with the knowledge I have and produced some P(e) with V(P(e)) > 0.

And the structure of P(e) is a fact about my state of mind at this particular moment, before knowing e. V(P(e)) > 0 is not a failure of introspection. I am uncertain about the value of P(e), maybe I'm uncertain how to best convey my uncertainty, but I'm certain I can't do better.

"Hindsight bias" is the misplaced feeling that I could've done better.

I can't say that my true probability is uncorrelated with the truth; here, there just isn't "my true probability" to begin with. Unlike (returning to broccoli) your brother, who has his own true probability, to which I don't have access.

author

Thanks! I think I'm following. If I understand the end at least, I think one thing you're picking up on is that the cases I'm labeling as "ambiguous" might be heterogeneous. There might be

(1) cases [which I have in mind] where you in fact have some underlying probability [or some imprecise probability within some interval], but you're unsure what that true underlying probability is. If so, the reasoning goes through.

But there might also be (2) cases in which there really is no (even semi-precise) fact about what your true opinion is. Rather, you know that you have uncertainty about some underlying quantity. A classic example might be Ellsberg urns: you're told this urn has 10 marbles in it, and that between 2 and 8 are black while the rest are white. The classic analysis is that (you know that) you're in the imprecise state of having confidence given by the interval [20%, 80%]. Of course, there is an underlying fact (the true proportion of black marbles) that you're unsure about. But you know (at t1 and t2) that you didn't know that proportion, and instead have this imprecise state. In a case like this, I think I agree—you shouldn't do hindsight bias.

What I do think I'd say, as part of a bigger-picture argument that I'm working on, is that I think a lot of cases in which people want to reach for an imprecise-probability model like (2) are better explained with a precise-but-higher-order-uncertain model like (1). Jennifer Carr has a good paper making a lot of points along this line that I agree with: https://philpapers.org/rec/CARIEW-4

Jul 21 · Liked by Kevin Dorst

You're presenting this as involving uncertainty at a time about what your opinion at that very time is. I know you're a fan of modeling that sort of uncertainty, but I don't see why it has to come in here. Even if agents are always certain at t of what they think at t, as long as they're uncertain at t of what they thought at t-n, you can get the phenomenon you're discussing. And that seems pretty reasonable--I don't remember exactly what I thought (even if, back when I thought it, I was certain that I thought it), but I think my earlier opinions were correlated with the truth.

author

Yeah, that's right. With information loss about what your priors were, the exact same reasoning would work. And of course that's definitely part of what's going on in most cases, like saying what you thought about Biden a month ago. So: point taken!

Obviously I've got an agenda here (for the book project), which is that since hindsight bias also tends to emerge *without* info loss (if people are presented with the new information immediately, or if they are reminded of what their old information was), then higher-order uncertainty can help explain that. (And you get a nice little theorem that HOU is necessary, and—provided positive covariance—sufficient.)
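A quick toy illustration of the "necessary" half of that claim (the reweighting scheme here is just an illustrative stand-in, not the theorem or its proof):

```python
# If there's no higher-order uncertainty, I'm certain my credence is (say) 0.6,
# and conditioning on the outcome can't move my estimate of my own prior.
no_hou = {0.6: 1.0}                                   # all weight on one candidate credence
reweighted = {c: w * c for c, w in no_hou.items()}    # "learn P" with the same trust-the-truth reweighting
Z = sum(reweighted.values())
print(sum(c * w / Z for c, w in reweighted.items()))  # 0.6 -- no shift, so no hindsight bias without HOU
```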

Maybe more interestingly, I think this predicts a contrast between clear and ambiguous cases. Currently working on an experiment that contrasts (1) clear cases of (say) predicting a coin flip of known bias, vs (2) ambiguous cases of (say) predicting whether Jane and Jim (arguing about where to go for dinner) will go to an Indian restaurant. So the idea is that highlighting the role of ambiguity might help explain when and why hindsight bias happens, in cases where the info-loss story is less plausible.

author

Actually, let me try out another reply to see what you think.

What about the idea that ambiguity in the initial judgment might help *explain* when and why people have worse memory for what their prior judgment was?

People obviously don't remember exactly what their credence was in Biden winning 6 months ago. But—supposing I'd told you 6 months ago that I was going to toss a fair coin today—I would think you *do* remember (or can reconstruct) that 6 months ago you had a 50% credence that this coin would land heads today. Or if I tell you that my birthday is in July, and then ask you to estimate how likely you thought it was, last January, that my birthday was in July, then (supposing you're sure I never told you this before), I'm guessing you'll correctly estimate "approximately 1/12".

If that works, ambiguity might, in a sense, be an important part even of the imperfect-memory version of the explanation. What do you think?

Jul 24 · edited Jul 25 · Liked by Kevin Dorst

Yeah I like this a lot (along the lines of your exchange above). If you put a gun to my head and force me to come up with a guess for how likely it is that P, I'll know what I'm saying (so, in some important sense, I have no higher-order uncertainty about what I've offered as a guess), but I may well forget what I said later. Depending on how easy/hard the choice is--clarity vs. ambiguity--I should exhibit more or less hindsight bias. In a case where the guess is clear/obvious, I'll be certain later on about what guess I offered earlier. In a case where it's not, I'll be more uncertain, and so there will be more room for hindsight bias.


In Bayesian terms, you seem to have in mind a hierarchical model where, as you say, the prior probability over the event of ultimate interest is itself a random variable. In that context, it makes perfect sense to update the lower-level priors in the light of new information that changes the model.

Alternatively, you can look at multiple priors models, where precisely this sort of thing happens. Grant, S., A. Guerdjikova, and J. Quiggin. 2018. Ambiguity and Awareness: A Coherent Multiple Priors Model.

author

Thanks! I didn't know about that 2018 paper, so will check it out! Definitely relevant to this and some broader projects.

The models aren't hierarchical exactly, at least as I understand it. Hierarchical models standardly have one (rigidly specified) probability function—say, your subjective probability function—unsure about another descriptively specified one (say, the objective probabilities or 'chances' as philosophers call them). In the hierarchical context, this latter one is usually the subjective probability function updated on the true cell of a partition (say, upon learning the bias of the coin). In that case, seeing that E happened clearly raises your subjective estimate for the objective probability, in the same way that seeing a coin land heads is evidence that it's biased-heads.

The model in the appendix is slightly (and it turns out, importantly) different. There's just one (descriptively specified) probability function P, which is unsure of its own values. (It's a random function from worlds w to distributions P_w, much like a Harsanyi type space—but without the assumption that you know your own type, so that it's possible that P_w(x) > 0 while P_x ≠ P_w.) And the result is that so long as that function is unsure of its own values and thinks it's correlated with truth, then upon learning E it'll raise its estimate for *its own prior* (rather than some other uncertain prior, like the objective chances).
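In case it's useful, here's a rough toy instance of that kind of structure; the numbers are made up and it's only a sketch of the idea, not the actual appendix model:

```python
# Worlds are tagged by which distribution is "mine" there ('hi' or 'lo') and by
# whether E is true.  The 'hi' distribution leaves open that it might really be
# the 'lo' one -- i.e. P_w(x) > 0 even though P_x differs from P_w.
P = {
    'hi': {('hi', True): 0.4, ('lo', True): 0.2, ('hi', False): 0.2, ('lo', False): 0.2},
    'lo': {('hi', True): 0.2, ('lo', True): 0.2, ('hi', False): 0.2, ('lo', False): 0.4},
}

def prob_E(dist):
    return sum(p for (t, e), p in dist.items() if e)

me = P['hi']                               # suppose the actual world is a 'hi' world
cred_at = {t: prob_E(P[t]) for t in P}     # Pr(E) at 'hi' worlds (0.6) vs 'lo' worlds (0.4)

# My estimate of my own prior in E, before and after learning E:
E_prior = sum(p * cred_at[t] for (t, e), p in me.items())
E_post  = sum(p * cred_at[t] for (t, e), p in me.items() if e) / prob_E(me)
print(round(E_prior, 3), round(E_post, 3))   # 0.52 -> 0.533: the estimate of my *own prior* rises
```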

Here's an example to bring out the contrast.

Case 1: There's a coin of unknown bias, but I know that I have no information about it and so am precisely uniform over the biases—and therefore assign 50% credence to it landing heads at t1. Then I see it land heads. This raises my estimate for the bias of the coin, but doesn't raise my estimate for what MY prior was—I still know that I was exactly uniform over the biases, and so exactly 50% in the first toss landing heads.

Case 2: There's a coin of unknown bias. Anne told me that it's heads-biased while Bill told me it's tails-biased. I'm unsure exactly how to balance their competing testimony (who I trust more, who said it in a more convincing way, etc.), so I'm unsure what I think about the bias of the coin and (therefore) about how likely it is to land heads. I know I'm not too opinionated, but leave open that my credence in heads at t1 might be anywhere between 40–60%. Then I see it land heads. This again is evidence that the coin is biased toward heads. But—if I trust my underlying true opinion—it's ALSO evidence that my prior in heads at t1 was toward the higher end of that range, i.e. that I trusted Anne more than Bill.
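To put rough numbers on the contrast (purely illustrative; the "reweight by c" step encodes the "I trust my underlying true opinion" assumption):

```python
# Case 1: I *know* my credence in heads was exactly 0.5 (uniform over biases).
my_prior_case1 = {0.5: 1.0}
# Case 2: unsure how much I trusted Anne vs Bill, so unsure of my own credence.
my_prior_case2 = {0.4: 1/3, 0.5: 1/3, 0.6: 1/3}

def estimate_of_my_prior(dist, saw_heads):
    # Reweight each candidate credence c by how well it predicted heads,
    # on the assumption that my true credence is correlated with the truth.
    weights = {c: w * (c if saw_heads else 1.0) for c, w in dist.items()}
    Z = sum(weights.values())
    return sum(c * w / Z for c, w in weights.items())

for name, dist in [('Case 1', my_prior_case1), ('Case 2', my_prior_case2)]:
    print(name, estimate_of_my_prior(dist, False), round(estimate_of_my_prior(dist, True), 3))
# Case 1: 0.5 -> 0.5   (seeing heads tells me about the coin, not about my prior)
# Case 2: 0.5 -> 0.513 (seeing heads is also evidence that I trusted Anne more)
```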

What do you think?


You can look at this in various ways. I'd do something like the following. Initially, I don't formally consider whether Anne or Bill is more reliable. Once I observe the coin flip, I make this explicit. I assign a higher probability to Anne (whose evidence would suggest 0.6) than to Bill (whose evidence suggests 0.4). The updated prior for Heads is therefore greater than 0.5, and the multiple priors have collapsed to a single prior.
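Roughly, the calculation I have in mind would go something like this (numbers purely illustrative):

```python
# Treat "Anne is the reliable one" vs "Bill is" as explicit hypotheses, with
# Anne's testimony taken to imply P(heads) = 0.6 and Bill's to imply 0.4.
informants = {'Anne': (0.5, 0.6), 'Bill': (0.5, 0.4)}   # (prior weight, implied P(heads))

# Observe heads; update the weight on each informant by Bayes' rule.
post = {name: w * p for name, (w, p) in informants.items()}
Z = sum(post.values())
post = {name: w / Z for name, w in post.items()}        # Anne: 0.6, Bill: 0.4

# The multiple priors collapse to a single updated prior for heads:
p_heads = sum(post[name] * informants[name][1] for name in informants)
print(post, round(p_heads, 2))                          # 0.52 > 0.5
```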
