One of the oldest questions in formal epistemology is deceptively simple: why should rational agents have credences that obey the probability axioms? The Dutch Book argument offers one classic answer—violating probability means you can be exploited by a clever bookie. But many epistemologists find this unsatisfying. Rational belief shouldn't depend on betting scenarios. We need a purely epistemic justification, one that speaks to the cognitive goal of getting things right.
Enter the accuracy-dominance framework, developed most rigorously by James Joyce and later refined by Richard Pettigrew. The core claim is elegant and powerful: any credence function that violates the probability axioms is accuracy-dominated by some coherent credence function. That is, there exists a probabilistically coherent alternative that is guaranteed to be closer to the truth, no matter how the world turns out. Incoherence is not just risky—it is gratuitously inaccurate.
This result has become one of the most discussed foundations for Bayesianism in contemporary epistemology. It promises to ground probabilistic coherence in a purely epistemic value—accuracy—without invoking pragmatic considerations about bets or decisions. But the argument's force depends on substantive assumptions about how we measure accuracy, what counts as dominance, and whether the framework extends cleanly to imprecise credences. Each of these deserves careful scrutiny. Let us examine the formal architecture, its philosophical interpretation, and the boundary cases where the argument's reach remains contested.
The Dominance Theorem: Incoherence Is Gratuitous Inaccuracy
The formal backbone of the accuracy-dominance program is a theorem relating proper scoring rules to probabilistic coherence. A scoring rule is a function that measures the distance between a credence function and the truth—understood as the omniscient credence function that assigns 1 to all truths and 0 to all falsehoods at a given world. A scoring rule is strictly proper if every coherent credence function expects itself to do best: by the lights of any probability function p, expected inaccuracy is uniquely minimized by p itself.
The central result, originally established by Joyce (1998) and given a more general treatment by Pettigrew (2016), can be stated precisely. Let c be a credence function over a finite partition {w1, …, wn}. If c violates the probability axioms—that is, if the values assigned to the partition do not sum to 1, or if some value lies outside [0, 1]—then for every strictly proper scoring rule S, there exists a probabilistically coherent credence function c* such that S(c*, wi) < S(c, wi) for every world wi: the coherent alternative is strictly less inaccurate no matter which world is actual.
What does this mean concretely? Consider the Brier score, the most commonly used strictly proper scoring rule: S(c, w) = Σj (c(Xj) − w(Xj))², where w(Xj) is 1 if Xj is true at w and 0 otherwise. Geometrically, the set of coherent credence functions over the partition forms a simplex in ℝⁿ, and the Brier inaccuracy at a world is the squared Euclidean distance from the vertex representing that world. The dominance theorem then follows from a fact about convex sets: the nearest point on the simplex to any point outside it is strictly closer to every vertex than the original point is.
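To make the geometry concrete, here is a small self-contained sketch (my own illustration, not an example from Joyce or Pettigrew): an incoherent credence over a two-cell partition, its Euclidean projection onto the simplex, and a check that the projected coherent credence has a strictly lower Brier score in both worlds.

```python
def brier(c, w):
    """Brier inaccuracy of credence vector c at world (omniscient credence) w."""
    return sum((ci - wi) ** 2 for ci, wi in zip(c, w))

def project_to_simplex(c):
    """Euclidean projection of a 2-vector onto the segment {(p, 1 - p)}."""
    # For two cells, redistribute the excess mass equally, then clip to [0, 1].
    excess = (sum(c) - 1.0) / 2.0
    p = min(1.0, max(0.0, c[0] - excess))
    return (p, 1.0 - p)

c = (0.8, 0.8)                       # incoherent: the credences sum to 1.6
c_star = project_to_simplex(c)       # approximately (0.5, 0.5)
worlds = [(1.0, 0.0), (0.0, 1.0)]    # omniscient credences for each world

for w in worlds:
    # The projected (coherent) credence is strictly more accurate everywhere.
    assert brier(c_star, w) < brier(c, w)
```

The dominance here is not a matter of luck about which world is actual: the projection beats the incoherent credence at both vertices, which is exactly what the theorem guarantees in general.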
This geometric intuition generalizes. For any strictly proper scoring rule satisfying certain regularity conditions—continuity, strict propriety, and what Pettigrew calls additivity—the dominance result holds. The proof leverages the convexity of the set of coherent credence functions and the mathematical properties that strict propriety imposes on the scoring rule's gradient structure. The key technical insight is that strict propriety ensures no coherent credence function is itself dominated, making coherence both necessary and sufficient for avoiding dominance.
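The strict propriety condition doing the work here can be spot-checked numerically. The sketch below (my own check, using the Brier score on a single proposition X over the partition {X, not-X}) confirms that the p-expected inaccuracy of a credence q in X is minimized exactly at q = p.

```python
def expected_brier(p, q):
    """p-expected Brier inaccuracy of credence q in X (and 1 - q in not-X)."""
    # If X is true (probability p):  (q - 1)^2 + ((1 - q) - 0)^2 = 2 * (1 - q)^2
    # If X is false (probability 1 - p):  (q - 0)^2 + ((1 - q) - 1)^2 = 2 * q^2
    return p * 2 * (1 - q) ** 2 + (1 - p) * 2 * q ** 2

p = 0.3
grid = [i / 1000 for i in range(1001)]
best_q = min(grid, key=lambda q: expected_brier(p, q))
assert abs(best_q - p) < 1e-9   # expected inaccuracy is minimized at q = p
```

Because every probability function uniquely expects itself to do best, no coherent credence function can be dominated by another, which is the fact the proof exploits.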
The theorem's strength lies in its state-independence. The coherent alternative c* is better in every possible world. This is not a claim about expected accuracy—it is an outright dominance claim. No matter what turns out to be true, you would have been more accurate had you been coherent. This makes the argument strictly stronger than expected-accuracy arguments and gives it a decision-theoretic flavor without requiring any particular decision theory.
Takeaway: Probabilistic incoherence is not a subtle flaw—it is a structural guarantee that you could have been more accurate in every possible state of the world, for free.
Philosophical Interpretation: Does Accuracy Trump Everything?
The formal result is clean. The philosophical interpretation is not. The dominance theorem establishes that incoherent credences are accuracy-dominated, but does this suffice as a norm of rationality? Several substantive philosophical questions intervene between the mathematical result and the normative conclusion that credences ought to be probabilities.
First, there is the choice of scoring rule. The theorem requires a strictly proper scoring rule, but there are infinitely many such rules—the Brier score, the logarithmic score, the spherical score, and parametric families connecting them. Different scoring rules can yield different dominating alternatives for the same incoherent credence function. If accuracy is supposed to ground a unique rational requirement, the pluralism of scoring rules is troubling. Pettigrew addresses this partly through additional axioms—notably additivity and continuity—that constrain the class of admissible scoring rules. But these axioms themselves require justification, and some epistemologists have argued that the choice of scoring rule smuggles in substantive epistemic commitments that the accuracy framework was supposed to derive.
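The sensitivity to the choice of scoring rule can be seen in a toy computation (my own example, not drawn from the literature). For the incoherent credence c = (0.8, 0.8) over a two-world partition, the set of coherent credences (p, 1 − p) that dominate c under the additive Brier score differs from the set that dominates it under the additive logarithmic score.

```python
import math

def brier(c, w):
    return sum((ci - wi) ** 2 for ci, wi in zip(c, w))

def log_score(c, w):
    # Additive log score: penalize credence in each cell, whether true or false.
    return sum(-wi * math.log(ci) - (1 - wi) * math.log(1 - ci)
               for ci, wi in zip(c, w))

c = (0.8, 0.8)
worlds = [(1.0, 0.0), (0.0, 1.0)]
grid = [i / 10000 for i in range(1, 10000)]  # avoid log(0) at the endpoints

def dominators(score):
    """Coherent credences (p, 1 - p) that weakly beat c at every world."""
    return [p for p in grid
            if all(score((p, 1 - p), w) <= score(c, w) for w in worlds)]

brier_set = dominators(brier)      # roughly p in [0.417, 0.583]
log_set = dominators(log_score)    # roughly p in [0.400, 0.600]
assert brier_set != log_set        # the two rules license different repairs
```

Both rules agree that c is dominated, but they disagree about which coherent credences do the dominating, which is the pluralism worry in miniature.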
Second, there is the question of whether accuracy is the only epistemic value. The dominance argument works precisely because it isolates a single measure of epistemic goodness. But epistemologists like William Talbott and Hilary Greaves have asked whether other epistemic virtues—informativeness, calibration, explanatory power—might sometimes conflict with pure accuracy. If they do, the dominance argument shows only that incoherence is accuracy-dominated, not that it is epistemically dominated all things considered. Joyce's response is that accuracy is constitutive of the epistemic enterprise in a way that other values are not: credences just are the sort of thing that aim at truth, and accuracy measures how well they succeed. This is a defensible position, but it is a philosophical thesis, not a mathematical consequence.
Third, the dominance argument faces the content-independence objection. The dominating coherent credence function c* is typically constructed as the nearest coherent function to the agent's actual credences. But there is no guarantee that c* respects the agent's evidence or epistemic history. It might assign high credence to propositions the agent has strong evidence against. The accuracy-dominance argument says: adopt c* or something even better. It does not say: adopt c* specifically. This raises questions about whether the argument establishes the right kind of normative force—it tells you that some coherent function dominates you, but not which one to adopt.
Finally, there is a deep question about the normativity of dominance reasoning itself. The argument assumes that if option A dominates option B in every state, a rational agent should prefer A. This principle seems hard to deny in decision theory. But translating it to epistemology requires treating credences as choices subject to an accuracy-consequentialism, which some epistemologists resist. Deontological epistemologists, for instance, may hold that credences should be governed by rules about evidence-responsiveness, not by consequentialist scoring. The accuracy-dominance program is thus committed to a broadly consequentialist metaepistemology—a commitment that is philosophically significant and not universally shared.
Takeaway: The mathematics of accuracy-dominance is compelling, but the bridge from formal dominance to epistemic obligation rests on philosophical commitments about what matters in belief—commitments that must be defended, not merely assumed.
Extension to Imprecision: Where the Framework Strains
One of the most active frontiers of the accuracy-dominance program concerns imprecise credences—the view that rational agents should sometimes be represented not by a single probability function but by a set of probability functions (a credal set). This view, defended by Isaac Levi, Peter Walley, and more recently by Susanna Rinard, is a natural response to situations of deep uncertainty where no single probability seems uniquely warranted.
The question is whether accuracy-dominance arguments support or undermine imprecise credences. The results here are strikingly sensitive to axiom choice. Pettigrew (2016) argues that, given certain natural assumptions about how to score imprecise credences, accuracy considerations favor precise credences. His argument proceeds by defining the accuracy of a credal set as the worst-case accuracy among its members—a minimax approach. Under this definition, any imprecise credal set is dominated by some member of that set. The agent would be better off, in the worst case, simply committing to one of the probability functions already in their set.
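A toy computation (my own construction, using two worlds and the Brier score) illustrates why the minimax evaluation rule has this consequence: if a credal set's inaccuracy at a world is the worst Brier score among its members, then each member weakly dominates the set itself, and typically strictly dominates it at some world.

```python
def brier(c, w):
    return sum((ci - wi) ** 2 for ci, wi in zip(c, w))

credal_set = [(0.3, 0.7), (0.6, 0.4)]   # two coherent members
worlds = [(1.0, 0.0), (0.0, 1.0)]

def set_score(members, w):
    """Minimax convention: the set's inaccuracy at w is its worst member's."""
    return max(brier(c, w) for c in members)

for member in credal_set:
    # Each member never does worse than the set, and does strictly better
    # at some world.
    assert all(brier(member, w) <= set_score(credal_set, w) for w in worlds)
    assert any(brier(member, w) < set_score(credal_set, w) for w in worlds)
```

On this convention, hedging across the set buys nothing: committing to any single member is at least as accurate in every world.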
However, this result depends critically on the evaluation rule for credal sets. If instead we adopt a maximality criterion—where a credal set is permissible as long as no precise credence dominates every member of the set—then imprecise credences survive. Mayo-Wilson and Wheeler (2016) show that under alternative scoring regimes, imprecise credences are not accuracy-dominated. The formal landscape bifurcates: your view on whether imprecision is rational depends on which meta-level decision rule you adopt for evaluating sets of credence functions.
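By contrast, a sketch of the maximality criterion (my own example, with the Brier score and a grid search) shows how an imprecise set can survive: no single precise credence dominates every member of the credal set {(0.2, 0.8), (0.8, 0.2)}.

```python
def brier(c, w):
    return sum((ci - wi) ** 2 for ci, wi in zip(c, w))

members = [(0.2, 0.8), (0.8, 0.2)]   # a credal set with two coherent members
worlds = [(1.0, 0.0), (0.0, 1.0)]
candidates = [(i / 1000, 1 - i / 1000) for i in range(1001)]

def dominates(a, b):
    """a weakly beats b at every world and strictly beats it at some world."""
    return (all(brier(a, w) <= brier(b, w) for w in worlds)
            and any(brier(a, w) < brier(b, w) for w in worlds))

# No precise credence dominates every member, so the set passes maximality.
assert not any(all(dominates(cand, m) for m in members) for cand in candidates)
```

Since each member is itself coherent, strict propriety guarantees it is undominated, so the set as a whole clears the maximality bar even though it fails the minimax test.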
This sensitivity reveals something important about the limits of accuracy-first epistemology. The dominance theorem for precise credences succeeds because the mathematical structure—convexity of the probability simplex, properties of strictly proper scoring rules—is well-behaved. When we move to credal sets, we introduce set-valued objects where dominance, distance, and accuracy become ambiguous. Different formalizations encode different philosophical intuitions about what imprecision means and what it is for. The mathematics alone cannot adjudicate between these interpretations.
The imprecision debate thus illuminates a broader methodological point. Formal epistemology gains its power from making philosophical commitments mathematically precise. But the choice of formalization is itself a philosophical act. Accuracy-dominance arguments are strongest when the formal framework is constrained enough to yield unique results—as in the precise-credence case. When the framework admits multiple legitimate formalizations, the arguments become conditional: if you formalize imprecision this way, then accuracy favors precision. The philosophical question of which formalization is correct must be answered on grounds that the formal framework itself cannot provide.
Takeaway: When accuracy-dominance arguments are extended to imprecise credences, the results fracture along axiom lines—revealing that the power of formal methods depends on philosophical choices made before the first equation is written.
The accuracy-dominance argument for probabilistic coherence is among the most elegant results in formal epistemology. It offers a purely epistemic justification for Bayesianism—one that avoids Dutch Books, pragmatic entanglements, and appeals to intuition. The mathematical core is robust: for any strictly proper scoring rule, incoherent credences are gratuitously inaccurate.
Yet the argument's philosophical force is not exhausted by the theorem. It depends on commitments about the primacy of accuracy, the normativity of dominance reasoning within epistemology, and the choice of formal apparatus. These are substantive positions that deserve—and have received—serious challenge.
The extension to imprecise credences reveals the framework's boundaries most clearly. Where the mathematics underdetermines the result, philosophy must lead. The accuracy-dominance program is not a closed proof but an ongoing research program—one that exemplifies both the power and the inherent limitations of bringing formal methods to bear on questions about rational belief.