What makes a belief good? Not morally good, not practically useful—epistemically good. The question sounds simple, but it conceals a formal architecture that has reshaped how we think about rationality itself. Epistemic utility theory proposes a stark answer: the sole epistemic good is accuracy. A belief's value is entirely determined by how close it comes to the truth. Everything else—coherence, simplicity, explanatory power—matters only insofar as it serves accuracy.
This framework borrows the machinery of decision theory and transplants it into a purely epistemic domain. Just as practical rationality can be modeled as maximizing expected utility over outcomes, epistemic rationality can be modeled as maximizing expected epistemic utility—where the utility function measures proximity to truth. The move is elegant, but it carries significant philosophical weight. It commits us to a form of epistemic consequentialism: the right credences are those that best promote the epistemic good, evaluated by their expected consequences for accuracy.
The project is not merely formal bookkeeping. It yields substantive results. Scoring rule arguments can derive probabilism—the thesis that rational credences satisfy the probability axioms—rather than simply assuming it. They can ground conditionalization as the uniquely rational update rule. But the framework also faces pointed objections: cases where maximizing expected accuracy seems to license intuitively irrational epistemic strategies. Understanding both the power and the limits of epistemic utility theory is essential for anyone working at the intersection of formal epistemology, Bayesian reasoning, and the foundations of rational belief.
Accuracy as Sole Epistemic Value
Epistemic utility theory begins with veritism: the thesis that accuracy is the sole fundamental epistemic value. More precisely, what we value are credences—degrees of belief—that lie close to the truth values of the propositions they concern. If proposition p is true, the ideal credence in p is 1; if false, 0. Every departure from this ideal represents epistemic loss.
The formal apparatus rests on scoring rules—functions that assign a numerical score to a credence given the actual truth value of the proposition. The most widely discussed is the Brier score, defined as the squared Euclidean distance between the credence function and the truth-value function: for a single proposition, if your credence is c and the truth value is v ∈ {0,1}, your inaccuracy is (c − v)². Summing over a partition yields a measure of total inaccuracy. The lower the score, the more accurate the credence function.
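The computation is simple enough to sketch directly. The function name below is illustrative, not standard library code; the partition and credences are made-up numbers:

```python
def brier_inaccuracy(credences, truth_values):
    """Total Brier inaccuracy: squared Euclidean distance from the truth."""
    return sum((c - v) ** 2 for c, v in zip(credences, truth_values))

# Partition {p, not-p}, where p is in fact true.
# Credence 0.8 in p and 0.2 in not-p:
score = brier_inaccuracy([0.8, 0.2], [1, 0])  # (0.8-1)^2 + (0.2-0)^2, i.e. about 0.08
```

A perfectly accurate credence function—1 in every truth, 0 in every falsehood—scores exactly 0.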
Not just any scoring rule will do. The critical property is strict propriety. A scoring rule is strictly proper if and only if, for any probability function p, the expected inaccuracy as calculated by p itself is uniquely minimized by adopting p as your credence function. This means no probabilistically coherent agent can expect to do better, by their own lights, by adopting a different credence function. The Brier score and the logarithmic score are strictly proper; simple absolute-value distance is not.
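The contrast between proper and improper rules can be checked numerically. Here is a minimal sketch for a single proposition whose probability of truth is taken, for illustration, to be 0.7; a grid search over candidate credences stands in for exact minimization:

```python
def expected_brier(p, c):
    """Expected Brier inaccuracy of credence c, computed by probability p."""
    return p * (c - 1) ** 2 + (1 - p) * c ** 2

def expected_absolute(p, c):
    """Expected absolute-distance inaccuracy of credence c, computed by p."""
    return p * abs(c - 1) + (1 - p) * abs(c)

p = 0.7
grid = [i / 100 for i in range(101)]

# Strictly proper: p's own expectation is minimized by adopting c = p itself.
best_brier = min(grid, key=lambda c: expected_brier(p, c))        # 0.7
# Improper: absolute distance rewards extremizing toward certainty.
best_absolute = min(grid, key=lambda c: expected_absolute(p, c))  # 1.0
```

Under absolute distance, an agent with credence 0.7 expects to do better by jumping to credence 1—precisely the pathology that strict propriety rules out.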
Why does strict propriety matter so profoundly? Because it enables the central argument for probabilism. Joyce's (1998) accuracy-dominance argument, later refined by Predd et al. (2009) and Pettigrew (2016), shows that if your credence function violates the probability axioms, then there exists a probabilistically coherent credence function that is closer to the truth no matter what the truth turns out to be. Your incoherent credences are accuracy-dominated. Under any additive, continuous, strictly proper scoring rule, incoherence guarantees that you could do strictly better in every possible world. This is not a pragmatic argument about betting losses—it is a purely epistemic argument about proximity to truth.
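A two-cell example makes the dominance concrete. The incoherent credences below and the projection step are illustrative choices; the general construction—projecting onto the set of probability functions—is what the Predd et al. result guarantees:

```python
def brier(credences, truth):
    """Brier inaccuracy of a credence vector at a truth-value vector."""
    return sum((c - v) ** 2 for c, v in zip(credences, truth))

# Incoherent credences over the partition {p, not-p}: they sum to 1.3.
incoherent = (0.6, 0.7)

# Orthogonal projection onto the coherent line c(p) + c(not-p) = 1.
excess = (sum(incoherent) - 1) / 2
coherent = tuple(c - excess for c in incoherent)   # approximately (0.45, 0.55)

# The coherent credences are strictly more accurate in BOTH possible worlds.
for world in [(1, 0), (0, 1)]:
    assert brier(coherent, world) < brier(incoherent, world)
```

Whether p turns out true or false, the projected credences beat the incoherent ones—dominance, with no appeal to expectations at all.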
The elegance here is that probabilism is derived rather than stipulated. We do not begin by assuming that rational credences must be probabilities. We begin with the single value judgment—accuracy matters—and a minimal structural requirement on how accuracy is measured. The probability axioms emerge as theorems. This represents a significant advance over Dutch Book arguments, which rely on pragmatic considerations about betting behavior, and over direct appeals to intuition about coherence. The foundation is squarely epistemic.
Takeaway: If you accept that accuracy is the sole epistemic good and adopt any strictly proper measure of it, probabilism is not an assumption—it is a consequence. The probability axioms are forced on you by the geometry of truth-closeness itself.
Epistemic Consequentialism
With accuracy as the currency, epistemic utility theory adopts a consequentialist structure: the right credences are those that best promote the epistemic good. Specifically, a rational agent should adopt the credence function that maximizes expected epistemic utility—or equivalently, minimizes expected inaccuracy—relative to their current credal state. This is epistemic consequentialism in its purest form.
The framework yields more than probabilism. Consider conditionalization. Suppose you learn that E is true. Greaves and Wallace (2006) showed that, under strictly proper scoring rules, the update rule that minimizes expected inaccuracy from the standpoint of your prior is precisely Bayesian conditionalization: setting your new credence function to p(· | E). No other update rule—not any ad hoc revision, and not Jeffrey conditionalization when the evidence is learned with certainty—scores as well in expectation. The result is powerful: conditionalization is the uniquely optimal updating strategy when evaluated by the epistemic consequentialist's own standard.
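The theorem is about update plans: functions from possible evidence to posterior credences, ranked by prior-expected inaccuracy. The toy prior and the hedging rival plan below are illustrative assumptions, not examples from Greaves and Wallace's paper:

```python
prior = {"w1": 0.3, "w2": 0.5, "w3": 0.2}   # a toy prior over three worlds
E = frozenset({"w1", "w2"})                  # the evidence partition is {E, not-E}
not_E = frozenset(prior) - E

def brier(cred, world):
    """Brier inaccuracy of a credence function over worlds, at a world."""
    return sum((cred[w] - (1.0 if w == world else 0.0)) ** 2 for w in cred)

def conditionalize(cell):
    """Bayesian conditionalization of the prior on a cell of the partition."""
    total = sum(prior[w] for w in cell)
    return {w: (prior[w] / total if w in cell else 0.0) for w in prior}

def expected_inaccuracy(plan):
    """Prior-expected inaccuracy of an update plan (cell -> posterior)."""
    return sum(pr * brier(plan[E if w in E else not_E], w)
               for w, pr in prior.items())

cond_plan = {cell: conditionalize(cell) for cell in (E, not_E)}
# A rival plan that hedges rather than conditionalizing:
rival_plan = {cell: {w: (0.4 if w in cell else 0.1) for w in prior}
              for cell in (E, not_E)}

assert expected_inaccuracy(cond_plan) < expected_inaccuracy(rival_plan)
```

The theorem guarantees this inequality for every rival plan, not just the one shown: by the prior's own lights, no update strategy can beat conditionalization in expectation.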
This extends to other epistemic norms. The Principal Principle—the norm that your credences should defer to known objective chances—can be given an accuracy-based vindication. If the objective chance of p is x, then setting your credence to x minimizes expected inaccuracy when expectations are calculated using the chance function. Similarly, certain norms of evidence-gathering and epistemic caution receive consequentialist justification: you should gather evidence when doing so is expected to improve accuracy, and you should be epistemically humble when your evidence is thin.
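For the Brier score, the chance-deference claim follows from a one-line completion of the square. If the objective chance of p is x, the chance-expected inaccuracy of credence c is

```latex
x(c-1)^2 + (1-x)c^2 = (c-x)^2 + x(1-x),
```

which is uniquely minimized at c = x. The residual term x(1 − x) is the irreducible expected inaccuracy that even the chance-matching credence carries.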
The parallels with practical decision theory are deliberate and illuminating. In practical rationality, an agent faces a decision problem: a set of acts, a set of states, and a utility function. The rational act maximizes expected utility. In epistemic rationality, the "acts" are credence functions, the "states" are possible worlds (which fix truth values), and the utility function is the scoring rule. The formal isomorphism allows us to import powerful results from decision theory—including representation theorems, dominance reasoning, and minimax strategies—directly into epistemology.
But the isomorphism also imports tensions. Practical consequentialism is famously permissive about which acts promote the good: sometimes lying maximizes welfare. Does epistemic consequentialism similarly license adopting credences that are locally irrational if they lead to greater accuracy downstream? This is the question that generates the deepest objections to the framework, and it is the question to which we now turn.
Takeaway: Epistemic consequentialism treats choosing credences like choosing actions: pick the ones with the best expected epistemic outcome. This decision-theoretic lens unifies probabilism, conditionalization, and deference to chance under a single maximization principle.
Problems and Alternatives
The most incisive objection to epistemic consequentialism comes from cases of epistemic trade-offs. Consider: suppose an evil demon will make you massively inaccurate about thousands of propositions unless you adopt an irrational credence in one specific proposition right now. The epistemic consequentialist seems forced to say you should adopt the irrational credence—it maximizes expected accuracy overall. But this is deeply counterintuitive. Rationality, many argue, should not require you to be irrational about p in order to be accurate about unrelated propositions q, r, s.
Selim Berker (2013) pressed this line forcefully, arguing that epistemic consequentialism collapses into a structure where the epistemic ends justify the epistemic means. Just as act-consequentialism in ethics can demand that you commit a murder to prevent five murders, epistemic consequentialism can demand that you believe irrationally to prevent greater inaccuracy. Berker contends this reveals a fundamental flaw: epistemic rationality has a deontological character that resists consequentialist reduction. The rational response to evidence is determined by the evidence itself, not by downstream accuracy consequences.
Defenders of epistemic utility theory have several responses. One is to restrict the framework to synchronic evaluation: what matters is the expected accuracy of your credence function at this moment, not its causal consequences for future accuracy. Under this reading, the demon case is irrelevant because adopting the irrational credence is less accurate now, even if it prevents future inaccuracy. The rational credence at time t is the one that minimizes expected inaccuracy at t. This move blocks the trade-off objection but raises new questions about how to handle diachronic epistemic evaluation.
Another response invokes dominance reasoning rather than expected utility maximization as the primary argumentative tool. The accuracy-dominance argument for probabilism does not require calculating expectations at all—it shows that incoherent credences are dominated in every possible world. This sidesteps the consequentialist framing entirely. Pettigrew (2016) has developed a sophisticated version of epistemic utility theory that leans heavily on dominance, reducing the framework's dependence on expectation-based reasoning and thereby blunting the force of trade-off objections.
Alternative frameworks exist. Epistemic deontology holds that certain epistemic norms—believe in accordance with your evidence, proportion belief to evidential support—are binding regardless of accuracy consequences. Process reliabilism focuses on the reliability of belief-forming mechanisms rather than the accuracy of individual credal states. And epistemic virtue theory locates epistemic value in character traits like intellectual humility and open-mindedness. Each offers resources that pure accuracy-maximization may lack. The most promising path forward may be a pluralism that retains the formal power of scoring-rule arguments while acknowledging that accuracy, though central, does not exhaust the space of epistemic evaluation.
Takeaway: The deepest challenge for epistemic utility theory is whether good epistemic ends can justify bad epistemic means. How you answer determines whether rationality is fundamentally about consequences for accuracy or about responding correctly to the evidence you actually have.
Epistemic utility theory offers a remarkable unification: from a single value—accuracy—and a formal measurement constraint—strict propriety—it derives the core norms of Bayesian epistemology. Probabilism, conditionalization, and chance deference emerge not as axioms but as theorems. The elegance is genuine and the results are substantive.
Yet the framework's consequentialist architecture creates pressure points that cannot be dismissed. Trade-off cases expose a tension between maximizing accuracy globally and responding rationally to local evidence. Whether the solution lies in synchronic restriction, dominance reasoning, or a more pluralistic account of epistemic value remains an open and generative question.
What is not in doubt is the productivity of the formal approach itself. By making epistemic value precise and measurable, scoring-rule epistemology transforms vague intuitions about rational belief into claims that can be rigorously evaluated, compared, and—where they fail—clearly diagnosed. The mathematics does not replace philosophical judgment, but it disciplines it.