Classical epistemology presumes that a rational agent maintains a single, globally coherent credence function Cr: F → [0,1] over a Boolean algebra F of propositions. Under this picture, probabilistic incoherence constitutes a failure of rationality, subject to Dutch book exploitation. Yet empirical agents—human and artificial alike—routinely exhibit contextually inconsistent beliefs that resist integration. The physicist who accepts quantum indeterminism in the lab but relies on deterministic intuitions at the dinner table is not obviously irrational; she may simply be fragmented.
The challenge is to construct formal frameworks that accommodate this phenomenon without collapsing into anything-goes subjectivism. David Lewis gestured at fragmentation in his analysis of implicit belief, and Agustín Rayo has developed the notion in more recent work. What has been missing is a rigorous probabilistic treatment that specifies how fragments are individuated, when they activate, and under what conditions their non-integration is rationally permissible rather than merely descriptively accurate.
This article develops such a framework. We replace the single credence function with a family {Cr_i}_{i∈I} indexed by contexts, tasks, or computational modules, each internally coherent but globally inconsistent. We then examine when this architecture is not merely tolerable but optimal—when the computational cost of integration exceeds its epistemic benefit. The result is a formal model of bounded rationality that takes seriously the resource constraints under which actual cognition operates.
Fragmented Credence States
Let an agent's epistemic state be represented not by a single probability measure but by a tuple ⟨I, {Cr_i}, A⟩, where I is an index set of fragments, each Cr_i: F_i → [0,1] is a credence function defined over a sub-algebra F_i ⊆ F of propositions, and A is an activation function mapping contexts c ∈ C to subsets of I. Crucially, there is no requirement that Cr_i(p) = Cr_j(p) when p ∈ F_i ∩ F_j.
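The tuple ⟨I, {Cr_i}, A⟩ can be made concrete in a few lines. The sketch below is illustrative only: the fragment names, propositions, contexts, and credence values are all assumptions chosen to show how activation selects the live fragments and how a shared proposition may receive conflicting credences.

```python
# Sketch of a fragmented credence state <I, {Cr_i}, A>.
# Fragments, propositions, and numbers are illustrative assumptions.

fragments = {
    "lab": {"determinism": 0.1, "hidden_variables": 0.2},    # Cr_lab over F_lab
    "everyday": {"determinism": 0.9, "free_will": 0.8},      # Cr_everyday over F_everyday
}

# Activation function A: context -> subset of the index set I.
activation = {
    "physics_seminar": {"lab"},
    "dinner_table": {"everyday"},
}

def live_credence(context, proposition):
    """Return the credences the active fragments assign to a proposition."""
    return {
        i: fragments[i][proposition]
        for i in activation[context]
        if proposition in fragments[i]
    }
```

Note that nothing in the structure forces `fragments["lab"]["determinism"]` to equal `fragments["everyday"]["determinism"]`: global inconsistency is representable, not ruled out.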
This formalism captures several phenomena simultaneously. First, partiality: different fragments concern different subject matters, so F_i need not equal F_j. Second, inconsistency: even on shared propositions, fragments may disagree. Third, contextual activation: the function A determines which credences are epistemically 'live' in a given deliberative context.
The individuation of fragments is not arbitrary. Natural candidates include task-specific modules (spatial reasoning, social cognition, formal mathematics), memory-indexed beliefs keyed to retrieval cues, and distinct conditional probability tables in a factored Bayesian network. Each fragment inherits its own update rule—typically standard conditionalization restricted to F_i—but cross-fragment updating requires additional machinery.
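The per-fragment update rule can be sketched as standard conditionalization restricted to a fragment's own (finite) world set; the worlds, priors, and likelihoods below are illustrative assumptions.

```python
def conditionalize(cr, likelihood):
    """Standard Bayesian conditionalization within a single fragment.

    cr: dict mapping the fragment's worlds to prior probabilities.
    likelihood: dict mapping worlds to P(evidence | world).
    """
    joint = {w: cr[w] * likelihood.get(w, 0.0) for w in cr}
    z = sum(joint.values())
    if z == 0:
        raise ValueError("evidence has zero prior probability in this fragment")
    return {w: p / z for w, p in joint.items()}

# Toy fragment over three worlds (made-up numbers):
prior = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
# Evidence rules out w3; the surviving mass renormalizes.
posterior = conditionalize(prior, {"w1": 1.0, "w2": 1.0, "w3": 0.0})
```

Because the update never touches propositions outside the fragment's algebra, it is cheap; the cross-fragment machinery mentioned above is what this locality buys its way out of.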
The global state permits local Dutch book immunity within each fragment while tolerating global incoherence. An adversary can exploit the agent only by constructing bets that span fragments simultaneously, which requires the agent to activate multiple fragments in a single deliberative context. If activation is constrained by cognitive architecture, such exploitation may be practically unrealizable.
This yields a probabilistic generalization of Stalnaker's compartmentalized belief and connects to hierarchical Bayesian models in machine learning, where distinct inference modules operate with locally coherent but globally unreconciled parameters.
Takeaway: A mind is not necessarily a single ledger of beliefs—it may be a federation of ledgers, each internally balanced but unaudited against the others. Rationality within fragments is cheap; rationality across them is expensive.
When Fragmentation Is Rational
The orthodox view treats any deviation from global coherence as a defect. But this presupposes that integration is costless—a premise that fails for any physically realized cognitive system. Once we introduce computational bounds, fragmentation can emerge as the solution to an optimization problem rather than its failure.
Consider an agent with bounded working memory of capacity k and a proposition space of size n ≫ k. Maintaining a joint distribution over all 2^n possible worlds is intractable; even factored representations require inference that scales poorly. A fragmented architecture partitions the space into locally tractable sub-problems, sacrificing global coherence for computational feasibility. The question becomes: what is the optimal partition?
Formally, define the expected utility of an epistemic architecture as EU(A) = E[V(decisions) − C(inference)], where V is task value and C is computational cost. Global coherence maximizes V but incurs prohibitive C. Complete fragmentation minimizes C but sacrifices cross-domain inference. The rational architecture is the one maximizing EU subject to the agent's resource constraints.
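The optimization can be sketched with a toy instance of EU(A) = E[V] − C; the three candidate architectures and their values and costs below are made-up numbers for illustration, not empirical estimates.

```python
# Toy instance of EU(A) = E[V] - C over candidate epistemic architectures.
# Values and costs are illustrative assumptions in common utility units.

architectures = {
    "global":     {"value": 10.0, "cost": 9.5},  # full integration: high V, prohibitive C
    "fragmented": {"value": 8.0,  "cost": 1.0},  # locally coherent sub-problems
    "atomized":   {"value": 5.0,  "cost": 0.2},  # complete fragmentation: no cross-domain inference
}

def expected_utility(arch):
    return arch["value"] - arch["cost"]

best = max(architectures, key=lambda a: expected_utility(architectures[a]))
```

On these numbers the partially fragmented architecture wins: global coherence buys too little extra task value to cover its inference cost, while complete fragmentation forfeits too much cross-domain value.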
This reframes ideal rationality as a limit case appropriate only to logically omniscient agents with unbounded resources. For any agent with finite cognitive capacity, some degree of fragmentation is not just permissible but required. The Bayesian norm 'maintain coherent credences over all propositions' becomes analogous to the economic advice 'compute utilities over all possible actions'—aspirational but computationally incoherent.
Task-specificity provides a second rationale. Fragments specialized for particular inference problems can encode domain-specific priors and likelihoods that would be diluted or distorted by integration. A fragment tuned for causal reasoning in physics need not reconcile with one tuned for Bayesian inference about social intentions; their integration might produce worse performance on both tasks.
Takeaway: Ideal rationality is a fiction appropriate to Laplacean demons, not to minds. For bounded agents, some incoherence is not a bug but an optimization.
Integration Costs
If fragmentation is rational under bounds, we need a formal account of what integration costs—both to characterize when it is worth paying and to delineate the shape of bounded rationality. Let the integration operation ⊕ take fragments Cr_i and Cr_j and produce a unified credence Cr_ij over the algebra generated by F_i ∪ F_j. The cost C(⊕) decomposes into three components.
First, reconciliation cost: when fragments disagree on shared propositions, integration requires adjudication—typically a weighted averaging or more sophisticated pooling operation. The computational cost scales with |F_i ∩ F_j| and the divergence KL(Cr_i ∥ Cr_j). High-disagreement fragments are expensive to merge.
Second, joint distribution cost: even when fragments agree on marginals, constructing a joint distribution requires specifying dependencies, with complexity that can scale exponentially in the combined variable set. This is the bottleneck that factored Bayesian networks and probabilistic graphical models exist to manage.
Third, maintenance cost: an integrated credence must be updated coherently under new evidence, which propagates through the entire joint distribution. Exact inference in densely connected networks is NP-hard in the worst case; approximate methods such as loopy belief propagation trade accuracy for tractability. Fragments, by contrast, update locally and cheaply.
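The first of these components, reconciliation cost, can be sketched directly: measure the divergence between two fragments on their shared propositions, then merge by weighted linear pooling. The two distributions below are illustrative assumptions over a single shared binary proposition.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two finite distributions over the same outcomes (in nats)."""
    return sum(p[x] * math.log(p[x] / q[x]) for x in p if p[x] > 0)

def linear_pool(p, q, w=0.5):
    """Weighted averaging of two fragments on their shared propositions."""
    return {x: w * p[x] + (1 - w) * q[x] for x in p}

# Two fragments' credences over a shared proposition (illustrative numbers):
cr_i = {"p": 0.8, "not_p": 0.2}
cr_j = {"p": 0.3, "not_p": 0.7}

divergence = kl_divergence(cr_i, cr_j)  # high divergence => expensive merge
merged = linear_pool(cr_i, cr_j)        # equal-weight pooling
```

Linear pooling is only the simplest adjudication rule; the text's point survives any pooling choice, since the cost scales with both the overlap |F_i ∩ F_j| and the measured divergence.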
These costs yield a formal bounded-rationality principle: integrate fragments i and j when and only when E[ΔV(integration)] > C(⊕_ij). This provides a precise successor to Simon's satisficing and Cherniak's minimal rationality—not a loose heuristic but a computable criterion. It also predicts when rational agents will resist integration, even under probing: when the epistemic gains do not justify the architectural overhead.
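The criterion reduces to a one-line decision rule; the cost arguments below mirror the three components just described, and all numbers are illustrative assumptions in common utility units.

```python
def should_integrate(expected_value_gain, reconciliation_cost,
                     joint_distribution_cost, maintenance_cost):
    """Integrate fragments i and j iff E[dV(integration)] > C(+_ij).

    The three cost arguments follow the decomposition in the text:
    reconciliation, joint-distribution construction, and maintenance.
    """
    total_cost = reconciliation_cost + joint_distribution_cost + maintenance_cost
    return expected_value_gain > total_cost

# Illustrative numbers: cross-domain inference gain vs. architectural overhead.
worth_it = should_integrate(5.0, 1.0, 2.0, 1.5)      # gain 5.0 > total cost 4.5
not_worth_it = should_integrate(2.0, 1.0, 2.0, 1.5)  # gain 2.0 < total cost 4.5
```

The second case is the predicted resistance to integration: an agent that declines to merge these fragments, even under probing, is behaving exactly as the criterion prescribes.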
Takeaway: Coherence is not free. The question is never whether to be coherent, but where to spend your coherence budget.
Belief fragmentation is not a pathology to be diagnosed away but a structural feature of any cognitive system operating under realistic constraints. The formal framework developed here—indexed credence families with contextual activation and cost-sensitive integration—provides the machinery to analyze when fragmentation is rational, how it should be modeled, and what its computational signature looks like.
This reorientation has implications beyond traditional epistemology. In artificial intelligence, it suggests that modular architectures with locally coherent but globally unreconciled components may be not merely practical compromises but principled designs. In cognitive science, it offers a normative vocabulary for phenomena long observed but poorly theorized.
The deeper lesson is that rationality cannot be evaluated independently of architecture. The question 'is this agent rational?' presupposes answers to 'under what resource constraints?' and 'against what task distribution?' Formal epistemology advances by taking these questions seriously—and by building the mathematics to answer them.