Bayesian epistemology rests on a deceptively simple imperative: when you learn that proposition E is true, update your credences by conditionalizing on E. Your new probability for any hypothesis H becomes your old conditional probability P(H|E). This procedure has dominated formal epistemology for decades, praised for its Dutch book arguments and its elegant mathematical properties.
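For later comparison, here is a minimal sketch of strict conditionalization over a finite set of worlds; the worlds, prior, and evidence set are illustrative assumptions of mine, not part of the argument above.

```python
# A minimal sketch of strict conditionalization over a finite set of worlds.
# The worlds, prior, and evidence set are illustrative assumptions.

def conditionalize(prior, E):
    """P_new(w) = P_old(w | E): zero out non-E worlds, then renormalize."""
    mass = sum(p for w, p in prior.items() if w in E)  # P_old(E)
    return {w: (p / mass if w in E else 0.0) for w, p in prior.items()}

prior = {"H&E": 0.2, "H&notE": 0.3, "notH&E": 0.1, "notH&notE": 0.4}
posterior = conditionalize(prior, {"H&E", "notH&E"})  # learn E with certainty
print(posterior["H&E"])  # P(H|E) = 0.2 / 0.3 ≈ 0.667
```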
Yet strict conditionalization harbors a troubling assumption. It requires that evidence arrive with certainty—that upon learning E, your credence in E jumps immediately to 1. But evidence rarely works this way. Perceptual experiences provide probabilistic support without guaranteeing truth. Testimony shifts our confidence by degrees. Even scientific instruments report measurements with error bars, not binary verdicts.
This gap between the idealized certainty conditionalization demands and the uncertain evidence we actually receive has spawned rival update procedures. Jeffrey conditionalization handles cases where experience shifts your probability over a partition without pushing any element to certainty. Lewis's imaging procedure preserves similarity structures that matter for counterfactual reasoning. Each captures something conditionalization misses. The formal epistemologist's task is not to declare a universal winner but to understand precisely when each procedure applies—to develop what we might call a meta-epistemology of update rules that matches formal tools to evidential contexts with mathematical precision.
The Rigidity Problem: Handling Uncertain Evidence
Consider a perceptual experience that makes a colored object appear blue to you. Strict conditionalization requires identifying some proposition E that you now believe with certainty. Perhaps E is 'the object appears blue to me.' But this moves the uncertainty rather than eliminating it. The connection between appearance and reality remains probabilistic, and you still face uncertain evidence about the object's actual color.
Richard Jeffrey identified this as a fundamental limitation of standard Bayesian updating. His solution—now called Jeffrey conditionalization—allows experience to directly shift your probabilities over a partition {E₁, E₂, ..., Eₙ} without requiring certainty about any element. If your visual experience changes your credence in 'the object is blue' from 0.3 to 0.7, proportionally adjusting the remaining color probabilities, Jeffrey conditionalization propagates this shift through your entire belief state.
The formal rule is elegant. For any hypothesis H: P_new(H) = Σᵢ P_old(H|Eᵢ) × P_new(Eᵢ). Your new credence in H is a weighted average of your old conditional credences, where the weights are your new credences in the partition elements. The conditional probabilities P(H|Eᵢ) remain fixed—this is Jeffrey's rigidity constraint—while only the unconditional probabilities over the partition shift.
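Over a finite space of worlds the rule is straightforward to implement. Here is a minimal sketch, with worlds, partition, and numbers invented for illustration; the rigidity constraint shows up as the line that rescales each cell's worlds by a common factor.

```python
# A minimal sketch of Jeffrey conditionalization over a finite set of worlds.
# Worlds, partition, and numbers below are illustrative assumptions.

def jeffrey_update(prior, partition, new_weights):
    """Return the updated distribution over worlds.

    prior:       dict world -> P_old(world)
    partition:   dict cell label -> set of worlds in that cell
    new_weights: dict cell label -> P_new(cell), summing to 1
    """
    posterior = {}
    for label, cell in partition.items():
        cell_mass = sum(prior[w] for w in cell)  # P_old(E_i)
        for w in cell:
            # Rigidity: within E_i, worlds keep their old relative weights,
            # i.e. P_new(w | E_i) = P_old(w | E_i).
            posterior[w] = new_weights[label] * prior[w] / cell_mass
    return posterior

# Experience shifts credence in 'blue' from 0.3 to 0.7:
prior = {"blue&glossy": 0.2, "blue&matte": 0.1,
         "other&glossy": 0.3, "other&matte": 0.4}
partition = {"blue": {"blue&glossy", "blue&matte"},
             "other": {"other&glossy", "other&matte"}}
print(jeffrey_update(prior, partition, {"blue": 0.7, "other": 0.3}))
```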
This rigidity constraint is both the power and the limitation of Jeffrey's approach. It assumes that your experience provides information about the partition {Eᵢ} without providing information about how H relates to each Eᵢ. When a thermometer reading shifts your credence that the temperature is between 20-25°C, it typically doesn't change your beliefs about what would follow from various temperature ranges. The conditional structure remains stable while the unconditional distribution updates.
But rigidity can fail. Some experiences provide entangled information—shifting both your credence in E and your assessment of how E bears on H. A detective might simultaneously learn that a suspect was present at the scene and discover new information about what presence at the scene implies about guilt. Here Jeffrey conditionalization's rigidity assumption breaks down, demanding either iterative application or recognition that the situation requires different formal treatment altogether.
Takeaway: When evidence shifts your confidence without producing certainty, Jeffrey conditionalization preserves the conditional structure of your beliefs while updating unconditional probabilities—but only when your experience leaves the evidential relevance relations themselves untouched.
Imaging and Counterfactuals: Preserving Similarity
David Lewis introduced imaging as an alternative update procedure motivated by counterfactual reasoning. Where conditionalization redistributes probability mass among worlds where E holds, imaging transfers each world's probability to its closest E-world. The difference is subtle but profound for evaluating counterfactuals.
Consider the probability of 'if this match were struck, it would light.' Conditionalization on 'the match is struck' eliminates all unstruck-match worlds and renormalizes over struck-match worlds. But this includes worlds where the match is struck in a vacuum, or underwater, or after being soaked—worlds that were already in your probability space but may not represent what would happen if striking occurred. Imaging instead moves each unstruck-match world's probability to its closest struck-match counterpart, preserving relevant background conditions.
Formally, let S(w,E) denote the closest E-world to w according to a similarity ordering. Imaging on E yields P_image(H) = Σ_w P(w) · 1[S(w,E) ∈ H], where 1[·] is the indicator function, equal to 1 when S(w,E) ∈ H and 0 otherwise. Each world w contributes its probability to the hypothesis H exactly when w's closest E-world satisfies H. This preserves similarity structure in ways conditionalization cannot.
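Here is a sketch of imaging in the same finite-worlds style, assuming a Stalnaker-type selection function that returns a unique closest E-world for each world; the match worlds, numbers, and similarity judgments are illustrative assumptions.

```python
# A minimal sketch of imaging on E, assuming each world has a unique
# closest E-world. Worlds, numbers, and similarity are illustrative.

def image(prior, E, closest):
    """Transfer each world's whole mass to its closest E-world."""
    posterior = {w: 0.0 for w in E}
    for w, p in prior.items():
        posterior[closest(w)] += p  # closest(w) = w whenever w is in E
    return posterior

# The match example: the closest struck-world to a dry unstruck world is dry.
prior = {"unstruck&dry": 0.8, "struck&dry": 0.1, "struck&wet": 0.1}
E = {"struck&dry", "struck&wet"}
closest = lambda w: "struck&dry" if w == "unstruck&dry" else w
print(image(prior, E, closest))  # {'struck&dry': 0.9, 'struck&wet': 0.1}
# Conditionalization on E would instead give 0.5 / 0.5.
```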
The choice between conditionalization and imaging tracks different questions we might ask. 'What should I believe given that E?' often calls for conditionalization—E represents evidence about the actual world, and we want to update on learning actuality. 'What would be the case if E were true?' calls for imaging—we want to evaluate a counterfactual while holding fixed as much of the actual situation as E permits. Confusing these questions leads to systematic errors in probabilistic reasoning about hypotheticals.
Lewis's imaging also connects to interventionist approaches in causal inference. When Judea Pearl distinguishes observational conditioning P(H|E) from interventional conditioning P(H|do(E)), he captures a similar distinction. Observing that E holds is one kind of evidence; making E hold through intervention is another. Imaging provides a possible-worlds semantics for intervention that parallels Pearl's graphical approach, revealing deep structural connections between counterfactual logic and belief revision.
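A toy structural model makes the gap vivid. In the sketch below, where the variables and all numbers are invented for illustration, a confounder C raises both the chance of E and the chance of H, so observational conditioning overstates what intervening on E would achieve; the standard back-door adjustment computes the interventional quantity.

```python
# A toy contrast between observing E and making E true, assuming a single
# binary confounder C that raises both P(E) and P(H). All numbers invented.

P_C = {0: 0.5, 1: 0.5}                      # P(C = c)
P_E1_given_C = {0: 0.9, 1: 0.1}             # P(E = 1 | C = c)
P_H1_given_EC = {(1, 0): 0.9, (1, 1): 0.5,  # P(H = 1 | E = e, C = c)
                 (0, 0): 0.5, (0, 1): 0.1}

# Observational: P(H = 1 | E = 1) inherits C's correlation with E.
p_E1 = sum(P_C[c] * P_E1_given_C[c] for c in P_C)
p_H_obs = sum(P_C[c] * P_E1_given_C[c] * P_H1_given_EC[(1, c)]
              for c in P_C) / p_E1  # = 0.86

# Interventional: back-door adjustment, P(H=1 | do(E=1)) = sum_c P(H|E,c) P(c).
p_H_do = sum(P_H1_given_EC[(1, c)] * P_C[c] for c in P_C)  # = 0.70

print(p_H_obs, p_H_do)  # observation overstates the causal effect of E
```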
Takeaway: Use conditionalization when evidence tells you about actuality; use imaging when evaluating counterfactuals or interventions where similarity to the current situation matters more than probability mass among worlds where the antecedent already held.
Context-Sensitive Updating: Matching Tools to Evidence
No single update rule governs all epistemic contexts. The formal epistemologist must develop sensitivity to which procedure fits which evidential situation. This requires analyzing both the structure of incoming evidence and the purpose of the update.
Three diagnostic questions guide the choice. First: does the evidence produce certainty about some proposition? If yes, strict conditionalization applies with its full Dutch book justification. If no—if experience merely shifts probabilities without anchoring any proposition at 1—Jeffrey conditionalization or its generalizations become necessary. Second: does the update concern what is the case or what would be the case? Indicative learning calls for conditionalization; counterfactual or interventional reasoning often requires imaging. Third: does the evidence affect only unconditional probabilities, or does it also revise conditional probability relationships? Jeffrey conditionalization's rigidity assumption handles the former; the latter may require more complex revision procedures.
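Schematically (the predicate names and branch order below are my own shorthand, not a canonical procedure), the three questions compose as follows:

```python
# A schematic encoding of the three diagnostic questions. The branching
# and labels are illustrative shorthand, not a canonical decision rule.

def choose_update_rule(counterfactual_question: bool,
                       certain_evidence: bool,
                       rigidity_holds: bool) -> str:
    if counterfactual_question:
        return "imaging / interventional conditioning"
    if certain_evidence:
        return "strict conditionalization"
    if rigidity_holds:
        return "Jeffrey conditionalization"
    return "richer revision procedure (conditional structure itself shifts)"
```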
Practical applications abound in AI research and decision theory. A medical diagnosis system receiving probabilistic test results should often use Jeffrey conditionalization—the test shifts confidence in disease presence without producing certainty. A planning system evaluating hypothetical actions should use imaging or interventional conditioning—asking what would happen if an action were performed rather than what to believe given that it was performed. Conflating the two invites the systematic errors familiar from causal inference, such as mistaking a confounded observational association for a genuine causal effect.
The meta-epistemological insight is that update rules are tools, not universal laws. Just as different statistical tests suit different data structures, different update procedures suit different evidential situations. Mathematical rigor lies not in dogmatic adherence to one rule but in precise characterization of each rule's domain of applicability.
This pluralism about update rules extends to their justifications. Dutch book arguments strongly support conditionalization for certain evidence. Accuracy-based arguments favor different rules under different loss functions. Representation theorems connect update rules to underlying assumptions about evidence structure. The sophisticated formal epistemologist commands multiple justificatory frameworks, understanding which arguments apply where and why the formal landscape admits genuine alternatives rather than a single correct procedure.
Takeaway: Before updating beliefs, diagnose your evidential context: certainty versus probability shifts, indicative versus counterfactual questions, and whether conditional relationships themselves are under revision—then select the formal tool fitted to that specific structure.
Conditionalization, Jeffrey conditionalization, and imaging represent not competing dogmas but complementary tools in the formal epistemologist's toolkit. Each captures genuine features of rational belief revision under different evidential conditions. The apparent rivalry dissolves once we recognize that they answer different questions.
The deeper lesson concerns formalization itself. Mathematical precision does not mean finding the one true rule. It means characterizing precisely when each rule applies, understanding the assumptions that justify each procedure, and developing sensitivity to evidential structure that guides appropriate tool selection.
For researchers in epistemology, cognitive science, and artificial intelligence, this pluralistic framework provides actionable guidance. Match your update procedure to your evidential context. Respect the differences between learning facts and evaluating counterfactuals. Let mathematical rigor serve philosophical clarity rather than obscuring the genuine complexity of rational belief revision.