The idea is simple: probabilities don’t make any sense as tools for explaining the brain; they make sense as tools for describing what is literally possible. Describing what is possible and what is likely in the physical universe is awesome, and I’m all for it. It’s cool that we can understand particle physics, which may or may not bottom out at pure (weighted) randomness, or that we can get good estimates for how late trains are going to be before they even head out, based on our estimation of the situation.
However, people do not think in terms of events this way. People tend to consider a few narratively likely events and then make distinctions between them. Here “narratively likely” means that these events may or may not be realistically likely given the current information, but the agent feels they are possibilities that require explaining, either for internal coherency or for social cohesion. Consider a woman who is about to confess her love for someone: there seem to be two options to explain, does her crush like her back or not? Of course, that’s not true. There’s a lot of spectrum between “yes” and “no,” and there are dimensions that are entirely orthogonal: it turns out Mr. Crush has such low self-esteem he simply doesn’t take the confession seriously, the moment is gone, and Ms. Lover doesn’t push further.
In order to make probabilities work for explaining mental models, we’ve made mental models very funny. We’ve made them highly definitive where the brain is fuzzy. We’ve made them “instantaneous” instead of giving them the natural temporal thickness with which humans perceive events: the 2020 election went on and on, did it not?
At its heart, probability is an attempt to make different tradeoffs speak a common language, and I respect that. Probability is often used to describe “likelihood,” but it is really just a calculus of the possible and of how the possible interacts with observation; it has nothing to do with the dimension of “time” that likelihood or causation tend to entail. The problem, though, is that the way the brain models tradeoffs is erratic and contextual, and this makes describing things in probability messy.
Will I be able to finish writing this post before midnight? It’s a clear event that should be well modeled by probability, but my brain doesn’t view it that way. My brain views things in terms of stakes: what am I going to lose if I don’t finish, and therefore how much energy is it willing to put into making me manic enough to feebly push out my thoughts into words? Stakes always make sense, because resource management is something your brain does by definition. This, in fact, is the heart of Friston’s “free energy minimization” thesis: something that can minimize free energy is self-organizing, and something that persists in a complex steady state over time must be minimizing free energy.
In order for your brain to maintain its state, it needs to manage resources so that it doesn’t fall out of that state; that’s why it eventually forces you to sleep even if you try your hardest not to. To play this game the brain does not need to know about probability, but it does need to make tradeoffs. Hardcore Bayesians, like any true theory addicts, will say that the probabilities are there implicitly, and how can I argue with that? Any system of tradeoffs can be encoded with the basic machinery of probability, but you want the model that’s closest to the representation you’re given. The representation we’re given is behavior, and behavior is hard to extract probabilities from. Instead, when we want probabilities we make prediction markets, let people operate in stakes (which is how they actually operate), and then intuit probabilities from those stakes.
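The stakes-to-probabilities conversion described above can be sketched concretely. In a prediction market, the price of a contract that pays 1 unit if an outcome occurs is read as the implied probability of that outcome; since real prices rarely sum to exactly 1 (fees, spreads, noise), they get renormalized. This is a minimal illustrative sketch, not anything the post specifies: the function name and the market numbers are assumptions.

```python
def implied_probabilities(prices):
    """Convert contract prices (stakes) into normalized implied probabilities.

    `prices` maps each outcome to the price of a contract paying 1 unit
    if that outcome occurs. Raw prices rarely sum to exactly 1, so we
    renormalize them into a proper probability distribution.
    """
    total = sum(prices.values())
    return {outcome: price / total for outcome, price in prices.items()}


# Hypothetical binary market: the "yes" contract trades at 0.62,
# the "no" contract at 0.41 -- prices sum to 1.03, not 1.
market = {"yes": 0.62, "no": 0.41}
probs = implied_probabilities(market)
```

Note that the normalization step is exactly the "forever renormalizing" the post complains about: the raw stakes are the primary data, and the probabilities are a derived, cleaned-up view of them.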
Indeed, prediction markets are the perfect example, because they can always be converted to probability: it is assumed that the decision of “who won” will be discrete, atomic, and objective. But what about all the other things we bet on that we couldn’t possibly make a market for? Are there probabilities there? Who cares; it’s time we made a calculus of stakes, or be forever renormalizing probabilities as mental models inevitably shift with salience and value inflation.