
Bayesian Updating


Revise beliefs in proportion to the strength of new evidence rather than replacing them wholesale. The framework's value lies in naming how people fail to reason under uncertainty.

Transfers

  • revises probability estimates proportionally to the strength of new evidence rather than replacing them wholesale
  • forces consideration of base rates before evaluating the significance of specific observations
  • tracks the magnitude and direction of belief change, making the update itself the interesting output

Limits

  • breaks because the theorem requires precise numerical priors and likelihoods, but humans cannot reliably distinguish between 60% and 75% confidence
  • misleads because two reasoners with different priors reach different conclusions from the same evidence, and the model provides no way to adjudicate starting points

Structural neighbors

  • Art Is Never Finished, Only Abandoned (visual-arts-practice) · scale, iteration, transform
  • Mr. Market (social-roles) · balance, iteration, select
  • Hofstadter's Law (self-reference) · scale, iteration, accumulate
  • Technical Debt (economics) · balance, scale, accumulate
  • Garbage Collection (sanitation) · iteration, accumulate
  • The Map Is Not the Territory · related
  • Survival of the Fittest · related
Full commentary & expressions

Transfers

A probability theorem reimagined as a discipline of thought. Bayes’ theorem tells you how to revise the probability of a hypothesis given new evidence: multiply your prior belief by the likelihood ratio and normalize. Munger’s version strips the math and keeps the structure — when the facts change, change your mind, and change it proportionally to the strength of the evidence.
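The arithmetic the paragraph describes fits in a few lines. A minimal Python sketch for a binary hypothesis, with made-up numbers for the prior and the two likelihoods:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E): multiply the prior by the likelihood, then normalize."""
    numerator = prior * likelihood_if_true                  # P(H) * P(E | H)
    evidence = numerator + (1 - prior) * likelihood_if_false  # P(E), the normalizer
    return numerator / evidence

# Example: start 30% confident; the evidence is 4x more likely
# if the hypothesis is true (0.80) than if it is false (0.20).
posterior = bayes_update(0.30, 0.80, 0.20)  # ≈ 0.63, not 1.0
```

Note that strong evidence moves the estimate substantially but does not replace it: the 30% prior still pulls the posterior well below certainty.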

Key structural parallels:

  • Prior beliefs are not fixed — you start with a probability estimate, not a certainty. The frame insists that all beliefs are provisional, held with varying degrees of confidence. This alone is a radical departure from how most people argue (as if their positions are either true or false, not 70% likely).
  • Evidence updates, it does not replace — new information shifts your estimate; it does not zero it out. A single data point should not make you abandon a well-supported prior. Conversely, overwhelming evidence should move even the strongest prior. The math enforces proportionality; the mental model asks you to approximate it.
  • Base rates matter — the theorem forces you to consider how common something is before evaluating specific evidence. A positive medical test means very different things depending on the disease’s prevalence. Most human reasoning ignores base rates entirely (Kahneman and Tversky’s base rate neglect). The Bayesian frame makes this error visible.
  • The update is the output — the interesting thing is not your current belief but how much you moved and why. Bayesian thinkers track their deltas, not just their positions.
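The medical-test point can be made concrete. A short sketch with hypothetical numbers (none of these figures describe a real test):

```python
# Hypothetical test: 99% sensitivity, 5% false-positive rate,
# for a condition with 0.1% prevalence (the base rate).
prevalence = 0.001
sensitivity = 0.99         # P(positive | sick)
false_positive = 0.05      # P(positive | healthy)

# P(positive) over the whole population, sick and healthy alike.
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive

# P(sick | positive): the base rate dominates the "accurate" test.
p_sick_given_positive = prevalence * sensitivity / p_positive  # ≈ 0.019
```

Despite the test being right 99% of the time on sick patients, a positive result here implies under a 2% chance of illness, because healthy false positives vastly outnumber true positives at this prevalence.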

Limits

  • Humans cannot do the math — the theorem requires precise probability estimates for priors and likelihoods. People cannot reliably distinguish between 60% and 75% confidence. The model asks for quantitative precision that human cognition cannot deliver. “Approximately Bayesian” is the best anyone can do, and the approximation can be wildly off.
  • Prior selection is subjective — the theorem is only as good as your starting point. Two Bayesians with different priors will reach different conclusions from the same evidence. The model provides a rigorous update mechanism but no rigorous way to choose where to start. In practice, priors often encode the biases the model was supposed to correct.
  • It assumes independent evidence — naive Bayesian updating treats each piece of evidence as independent. But real evidence is correlated: five news articles about the same event are not five independent data points. People who “update on everything” often massively overweight correlated information.
  • Unfalsifiable beliefs break the model — if your prior is 0% or 100%, no amount of evidence will move it. Bayes’ theorem multiplies by zero. The model has no mechanism for the radical conversion or paradigm shift — the moment when a prior is not updated but replaced.
  • It can rationalize stubbornness — “I have a very strong prior” is Bayesian language for “I refuse to change my mind.” The framework can be used to justify exactly the rigidity it was designed to prevent.
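Two of these failure modes are easy to demonstrate numerically. A sketch using the odds form of Bayes' rule, with an invented likelihood ratio:

```python
def bayes_update(prior, likelihood_ratio):
    """Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Correlated evidence: five articles about the same event, wrongly
# treated as five independent data points with a (made-up) LR of 3 each.
p = 0.5
for _ in range(5):
    p = bayes_update(p, 3)
# One honest update would give 0.75; naive re-updating inflates p to ~0.996.

# Immovable priors: a prior of exactly 0 multiplies every update by zero.
assert bayes_update(0.0, 1000) == 0.0
```

The first case shows why "updating on everything" overweights correlated information; the second shows why certainty (0% or 100%) leaves no room for evidence at all.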

Expressions

  • “What’s your prior?” — asking someone to state their starting belief explicitly, common in rationalist and effective altruism communities
  • “Update on that” — revise your belief in light of new evidence, used as a verb in tech and finance circles
  • “Strong prior” — high confidence in a belief before seeing evidence, sometimes a euphemism for stubbornness
  • “Base rate neglect” — the specific failure mode the model diagnoses, from Kahneman and Tversky’s work
  • “When the facts change, I change my mind” — attributed to Keynes (probably apocryphal), the informal version of Bayesian updating
  • “Extraordinary claims require extraordinary evidence” — Sagan’s razor, a Bayesian insight about the relationship between prior improbability and the evidence needed to overcome it

Origin Story

Thomas Bayes, an 18th-century Presbyterian minister, described the theorem in “An Essay towards solving a Problem in the Doctrine of Chances,” published posthumously in 1763. Laplace independently developed the same ideas more rigorously. For two centuries, Bayesian statistics was a minority position in a field dominated by frequentism (Fisher, Neyman, Pearson).

Munger absorbed Bayesian thinking not from statistics textbooks but from its application in decision theory and investing. His version emphasizes the behavioral challenge: humans anchor too heavily on priors and under-update on evidence, or they over-update on vivid anecdotes and ignore base rates. The model’s value is less in its mathematics than in its diagnosis of how people actually fail to reason under uncertainty.

The rationalist community (Yudkowsky, LessWrong) later elevated Bayesian updating to something approaching an identity marker — “aspiring rationalist” became synonymous with “person who tries to think Bayesianly.” This popularization both spread the concept and made it vulnerable to the stubbornness failure mode described above.

References

  • Bayes, T. “An Essay towards solving a Problem in the Doctrine of Chances,” Philosophical Transactions (1763)
  • Kahneman, D. & Tversky, A. “On the Psychology of Prediction,” Psychological Review (1973) — base rate neglect
  • Munger, C. “Academic Economics: Strengths and Faults After Considering Interdisciplinary Needs,” Herb Kay Undergraduate Lecture, UCSB (2003)
  • McGrayne, S.B. The Theory That Would Not Die (2011) — history of Bayesian statistics and its contentious acceptance

Contributors: agent:metaphorex-miner