Consider an automated reasoning system that has derived, through impeccable logical inference, that a particular route is optimal for delivery. New sensor data arrives indicating a road closure. The system's previous conclusion is now inconsistent with reality. How should it update its knowledge base while preserving as much valid reasoning as possible?
This computational challenge—maintaining rational coherence while accommodating contradictory information—lies at the heart of belief revision theory. Unlike classical logic, which simply collapses into triviality when contradictions enter the system, belief revision provides formal frameworks for gracefully updating knowledge states.
The theoretical foundations were laid by Carlos Alchourrón, Peter Gärdenfors, and David Makinson in their landmark 1985 paper, establishing what we now call the AGM framework. Their postulates don't merely describe how agents do change beliefs—they prescribe how rational agents should change beliefs when faced with new information. For AI systems operating in dynamic environments, these principles translate directly into algorithms that maintain logical consistency while minimizing information loss. Understanding belief revision means understanding the computational logic of rational adaptation itself.
AGM Framework: Postulates for Rational Belief Change
The AGM framework models an agent's beliefs as a belief set—a logically closed set of propositions. When new information arrives, the agent must perform one of three operations: expansion (adding new information without removing anything), contraction (removing a belief), or revision (incorporating new information while keeping the belief set consistent).
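To make the operations concrete, here is a minimal sketch in Python. It assumes propositional beliefs are represented as functions over truth-value assignments and lets a finite belief base stand in for the logically closed belief set; the atom names echo the delivery example and are purely illustrative.

```python
from itertools import product

# Minimal sketch: beliefs are propositional formulas over a fixed set of
# atoms, represented as functions from a valuation (dict atom -> bool) to
# bool. A finite belief base stands in for the logically closed belief set.

ATOMS = ["route_a_open", "route_a_optimal"]

def entails(base, formula):
    """base |= formula, checked by enumerating every valuation of ATOMS."""
    for values in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(b(v) for b in base) and not formula(v):
            return False
    return True

def expand(base, formula):
    """Expansion: add the new formula without removing anything."""
    return base | {formula}

# Initial beliefs: route A is open, and if it is open then it is optimal.
open_a = lambda v: v["route_a_open"]
optimal_if_open = lambda v: (not v["route_a_open"]) or v["route_a_optimal"]
base = {open_a, optimal_if_open}

print(entails(base, lambda v: v["route_a_optimal"]))   # True: optimality follows
closed = expand(base, lambda v: not v["route_a_open"]) # the road-closure report
print(entails(closed, lambda v: False))                # True: the base is now inconsistent
```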
Six basic postulates govern revision operations. The closure postulate requires the revised belief set to remain logically closed. The success postulate mandates that new information must be included in the result. The inclusion postulate ensures revision doesn't add beliefs beyond what expansion would add if the information were consistent.
The vacuity postulate specifies that if new information doesn't contradict existing beliefs, revision reduces to simple expansion. The consistency postulate requires the revised belief set to remain consistent whenever the new information itself is consistent. Finally, the extensionality postulate ensures that logically equivalent inputs produce identical outputs.
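Stated compactly in the standard notation, with K * φ for the revision of belief set K by φ, K + φ for expansion, and Cn for logical closure: closure is K * φ = Cn(K * φ); success is φ ∈ K * φ; inclusion is K * φ ⊆ K + φ; vacuity is K + φ ⊆ K * φ whenever ¬φ ∉ K; consistency requires K * φ to be consistent whenever φ is; and extensionality is K * φ = K * ψ whenever φ and ψ are logically equivalent.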
These postulates capture deep intuitions about rational belief change. Success ensures responsiveness to evidence. Inclusion embodies informational economy—don't add beliefs without justification. Consistency prevents the logical explosion that classical systems suffer when contradictions enter.
The computational significance emerges when implementing these postulates. They don't specify which beliefs to remove when contradiction forces contraction—they constrain the space of permissible revision functions. This leaves room for domain-specific heuristics while guaranteeing rational behavior. Representation theorems connect AGM postulates to Grove sphere semantics, where possible worlds are ordered by plausibility, making the framework amenable to algorithmic implementation.
Takeaway: Rational belief change isn't arbitrary adaptation—it's constrained by formal postulates that guarantee logical coherence while maximizing preservation of existing knowledge.
Contraction and Revision: The Levi and Harper Identities
A fundamental insight of the AGM framework is the interdefinability of contraction and revision through two remarkable identities. The Levi identity defines revision in terms of contraction: to revise by proposition φ, first contract by ¬φ, then expand by φ. The Harper identity moves in the opposite direction: to contract by φ, take the intersection of your current beliefs with the revision by ¬φ.
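A sketch of how the identities look in code, assuming the formula representation from the earlier sketch and taking the contraction and expansion operations as given (one concrete contraction is sketched below). For finite belief bases, the Harper identity's intersection is a syntactic set intersection, a simplification of the belief-set formulation.

```python
# Levi and Harper identities as function composition, with contract() and
# expand() supplied by the caller (e.g., the partial meet contraction
# sketched in the next subsection).

def neg(formula):
    return lambda v: not formula(v)

def revise(base, formula, contract, expand):
    """Levi identity: to revise by phi, contract by not-phi, then expand by phi."""
    return expand(contract(base, neg(formula)), formula)

def contract_via_harper(base, formula, revise_fn):
    """Harper identity: K contracted by phi = K intersected with K revised by not-phi."""
    return set(base) & set(revise_fn(base, neg(formula)))
```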
These identities reveal that contraction—removing beliefs without adding new ones—is the more fundamental operation. Revision can always be decomposed into contraction followed by expansion. This has profound computational implications: implementing rational revision reduces to implementing rational contraction.
Contraction functions face the selection problem. When removing a belief φ, which supporting beliefs should also be removed? The proposition φ might be derivable from multiple independent chains of reasoning. Removing too much violates informational economy; removing too little fails to actually eliminate φ from the belief set.
The AGM framework addresses this through partial meet contraction. First, identify all maximal subsets of the belief set that fail to imply φ—these are the remainder sets. Then use a selection function to choose among them. The contraction result is the intersection of selected remainder sets.
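A sketch of partial meet contraction under the same finite-belief-base simplification, reusing the truth-table entailment check from above; the example beliefs are illustrative.

```python
from itertools import product, combinations

ATOMS = ["route_a_open", "route_a_optimal"]

def entails(base, formula):
    """base |= formula, checked by enumerating every valuation of ATOMS."""
    for values in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(b(v) for b in base) and not formula(v):
            return False
    return True

def remainder_sets(base, formula):
    """All maximal subsets of the base that fail to entail the formula."""
    remainders = []
    for size in range(len(base), -1, -1):          # largest subsets first
        for subset in combinations(base, size):
            s = set(subset)
            if not entails(s, formula) and not any(s < r for r in remainders):
                remainders.append(s)
    return remainders

def partial_meet_contract(base, formula, select):
    """Intersect the remainder sets chosen by the selection function."""
    remainders = remainder_sets(base, formula)
    if not remainders:                             # formula is a tautology: remove nothing
        return set(base)
    return set.intersection(*select(remainders))

# Example: give up the belief that route A is optimal. Full meet (selecting
# every remainder) is the most cautious selection function.
open_a = lambda v: v["route_a_open"]
optimal_if_open = lambda v: (not v["route_a_open"]) or v["route_a_optimal"]
optimal_a = lambda v: v["route_a_optimal"]
base = {open_a, optimal_if_open, optimal_a}

print(remainder_sets(base, optimal_a))                              # two maximal remainders
print(partial_meet_contract(base, optimal_a, select=lambda rs: rs)) # their intersection (empty here: full meet discards a lot)
```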
Selection functions encode epistemic priorities—which beliefs the agent is more willing to abandon. Different selection functions yield different AGM-compliant contraction operations. This is where domain knowledge enters: a medical diagnosis system might prioritize test results over symptoms, while a legal reasoning system might prioritize constitutional principles over case precedents. The Levi identity then lifts these priorities to revision, creating revision functions tailored to specific applications.
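One illustrative way a selection function might encode such priorities, continuing the sketch above: score each remainder by the total weight of the beliefs it keeps and select the highest-scoring ones. The numeric weights are hypothetical.

```python
def priority_selector(weights):
    """weights: dict mapping each belief to an illustrative entrenchment score."""
    def select(remainders):
        score = lambda r: sum(weights.get(b, 0) for b in r)
        best = max(score(r) for r in remainders)
        return [r for r in remainders if score(r) == best]
    return select

# E.g., a dispatcher that trusts the road-network rule (optimal_if_open)
# more than the derived optimality conclusion itself:
select = priority_selector({open_a: 1, optimal_if_open: 3, optimal_a: 2})
print(partial_meet_contract(base, optimal_a, select))  # keeps optimal_if_open
```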
Takeaway: Contraction—the logic of giving up beliefs—is primary. Once you can rationally retract, you can rationally revise by first making room for new information.
Iterated Revision: Sequences of Rational Updates
The original AGM framework addresses single revisions, but real reasoning systems face sequences of updates. An autonomous vehicle doesn't receive one piece of information—it continuously integrates sensor streams. The question becomes: how should revision at time t+1 depend on the revision at time t?
The problem is that AGM postulates constrain belief sets, not the conditional structures needed for iterated revision. After revising by φ, your beliefs about what you would believe if you subsequently learned ψ are underdetermined by the basic postulates.
Darwiche-Pearl postulates extend AGM to handle iteration. They require that revising by φ and then by ψ (when ψ implies φ) should give the same result as revising directly by ψ. They also constrain how revision affects the agent's plausibility ordering over possible worlds—the structure that determines future revisions.
Computational implementations typically represent these orderings explicitly. Ordinal conditional functions assign natural numbers to possible worlds, with lower numbers indicating higher plausibility. Revision by φ minimally restructures this ordering to make φ-worlds most plausible. Different restructuring strategies—such as natural revision, lexicographic revision, or restrained revision—yield different iterated revision operators with distinct formal properties.
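A sketch of two of these strategies over an explicitly represented ordering, again assuming formulas as functions over valuations. The rank arithmetic below is one common way to realize natural and lexicographic revision, not the only one, and both operators assume the incoming formula is satisfiable.

```python
# Ordinal conditional function (OCF) sketch: kappa maps each possible world
# (a tuple of truth values in ATOMS order) to a natural-number rank, lower
# meaning more plausible.

ATOMS = ["route_a_open", "route_a_optimal"]

def holds(formula, world):
    return formula(dict(zip(ATOMS, world)))

def believes(kappa, formula):
    """A formula is believed iff it holds in every rank-minimal world."""
    top = min(kappa.values())
    return all(holds(formula, w) for w, r in kappa.items() if r == top)

def natural_revision(kappa, formula):
    """Promote only the most plausible formula-worlds to rank 0; push every
    other world down one step so the promoted worlds are strictly most plausible."""
    threshold = min(r for w, r in kappa.items() if holds(formula, w))
    return {w: 0 if holds(formula, w) and r == threshold else r + 1
            for w, r in kappa.items()}

def lexicographic_revision(kappa, formula):
    """Rank every formula-world above every non-formula-world, preserving
    the relative order inside each group."""
    phi = {w: r for w, r in kappa.items() if holds(formula, w)}
    rest = {w: r for w, r in kappa.items() if not holds(formula, w)}
    revised = {w: r - min(phi.values()) for w, r in phi.items()}
    if rest:
        offset = max(revised.values()) + 1
        revised.update({w: r - min(rest.values()) + offset for w, r in rest.items()})
    return revised
```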
The choice between strategies has practical consequences. Lexicographic revision treats new information as maximally reliable, ranking every world consistent with it above every world that is not, which can overwrite hard-won inductive generalizations. Natural revision is more conservative: it promotes only the most plausible worlds consistent with the new information and otherwise leaves the prior ordering untouched. For machine learning systems that must balance responsiveness to new data against stability of learned patterns, these distinctions translate directly into algorithm design decisions.
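Reusing the sketch above, a hypothetical sequence from the delivery example shows the divergence: after a closure report followed by a report that route A is not optimal, the natural operator gives the closure back up, while the lexicographic operator keeps treating it as settled.

```python
open_a = lambda v: v["route_a_open"]
closed_a = lambda v: not v["route_a_open"]
suboptimal_a = lambda v: not v["route_a_optimal"]

# Worlds are (route_a_open, route_a_optimal) tuples; the open-and-optimal
# world starts most plausible, the closed-and-suboptimal world least.
kappa0 = {(True, True): 0, (True, False): 1, (False, True): 1, (False, False): 2}

nat = natural_revision(natural_revision(kappa0, closed_a), suboptimal_a)
lex = lexicographic_revision(lexicographic_revision(kappa0, closed_a), suboptimal_a)

print(believes(nat, closed_a))   # False: natural revision has given the closure back up
print(believes(lex, closed_a))   # True: lexicographic revision still believes the closure
```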
Takeaway: Single-shot revision theory is incomplete—rational agents need principled strategies for handling belief change as a continuous process, not an isolated event.
Belief revision theory transforms the informal notion of changing your mind into precise computational operations with provable properties. The AGM postulates establish minimal rationality constraints. The Levi and Harper identities reveal the primacy of contraction. Darwiche-Pearl postulates extend these foundations to realistic sequential reasoning.
For AI systems operating in dynamic, uncertain environments, these frameworks provide more than theoretical elegance. They offer implementable algorithms that maintain consistency without sacrificing relevant knowledge. They enable principled trade-offs between responsiveness and stability.
The deeper lesson extends beyond engineering. Belief revision formalizes what it means to be rationally responsive to evidence—not just accepting new information, but integrating it coherently with everything else you know. That's a problem every reasoning system, artificial or biological, must solve.