Classical logic possesses a property that seems entirely reasonable until you examine how humans actually think. If you can derive a conclusion from a set of premises, adding more premises can never invalidate that conclusion. This is monotonicity—the principle that more information only expands what we can prove, never contracts it.

Yet consider this: you learn that Tweety is a bird, so you conclude Tweety can fly. Then you learn Tweety is a penguin. Suddenly your conclusion evaporates. This is not a failure of rationality—it is precisely how competent reasoners navigate a world where general rules admit exceptions. Classical logic cannot model this retraction; it lacks the machinery to withdraw previously sanctioned inferences.

Formal epistemology has developed sophisticated logical systems to capture this phenomenon: non-monotonic logics. These frameworks allow conclusions to be tentatively drawn from incomplete information, then defeated when additional facts emerge. They formalize the defeasible character of default reasoning—the capacity to reason sensibly with what we know while remaining open to revision. Understanding these systems illuminates both the structure of everyday inference and the foundations of artificial intelligence systems that must reason under uncertainty.

Monotonicity and Its Failure

In classical propositional and first-order logic, the consequence relation ⊢ satisfies monotonicity: if Γ ⊢ φ, then Γ ∪ Δ ⊢ φ for any set of formulas Δ. What you can prove from a knowledge base remains provable when you expand that knowledge base. This property underwrites the cumulative nature of mathematical proof—theorems, once established, stand permanently.
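The property is easy to exhibit concretely. Here is a minimal sketch in Python, assuming a toy Horn-clause forward chainer (the function consequences and the rule encoding are illustrative, not any standard library): whatever is derivable from a fact set remains derivable after the fact set grows.

```python
def consequences(facts, rules):
    """Forward-chain to the closure of `facts` under Horn rules,
    where each rule is a (frozenset_of_premises, conclusion) pair."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [(frozenset({'bird'}), 'has_wings'),
         (frozenset({'has_wings'}), 'winged')]

base = consequences({'bird'}, rules)
extended = consequences({'bird', 'penguin'}, rules)  # add a premise
assert base <= extended  # monotonicity: the consequence set only grows
```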

The monotonicity of classical deduction reflects a specific epistemic situation: complete and certain information. When your premises exhaustively characterize a domain, additional true information merely makes explicit what was already implicit. Mathematical reasoning typically enjoys this condition. Empirical reasoning typically does not.

Default reasoning operates under radically different epistemic conditions. We reason from incomplete information using defeasible generalizations. "Birds fly" is not a universal quantification—it expresses a default that admits exceptions. When we learn something is a bird, we tentatively conclude it flies, but this conclusion is not logically entailed; it can be overridden.

The formal challenge is precise: how do we construct a consequence relation ~⊦ (read "non-monotonically entails") that sanctions the inference from "x is a bird" to "x flies" while permitting this inference to be defeated by "x is a penguin"? We need ~⊦ to fail monotonicity in controlled ways that track our intuitions about reasonable default inference.

Three major research programs have addressed this challenge: Reiter's default logic, circumscription, and the preferential semantics of Kraus, Lehmann, and Magidor. Each offers a different formal architecture for non-monotonic inference, with distinct commitments about how defaults interact and how conflicts between defaults should be resolved. These are not merely technical variations—they encode substantive philosophical positions about the structure of defeasible reasoning.

Takeaway

Monotonicity is a feature of reasoning under certainty; its failure signals that we are operating with incomplete information and revisable conclusions—a condition that characterizes most human inference.

Default Logic Formalism

Raymond Reiter's default logic (1980) provides one of the most influential formalizations. A default theory is a pair (W, D) where W is a set of first-order formulas representing certain knowledge, and D is a set of default rules. Each default has the form α : β₁, ..., βₙ / γ, read: "If α is believed and β₁, ..., βₙ are each consistent with what is believed, then conclude γ."

The components have distinct epistemic roles. The prerequisite α must be derivable from current beliefs. The justifications β₁, ..., βₙ are consistency checks—the rule fires only if each βᵢ is not contradicted by current beliefs. The consequent γ is added to beliefs when the rule applies. The classical bird example becomes: Bird(x) : Flies(x) / Flies(x)—if x is a bird, and it is consistent to assume x flies, conclude x flies.
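Viewed as a data structure, a default is just this triple. A minimal sketch in Python (the Default class and the flat propositional encoding, with '~' marking negation, are illustrative assumptions, not Reiter's notation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Default:
    prerequisite: str   # alpha: must already be derivable from current beliefs
    justification: str  # beta: need only be consistent with current beliefs
    consequent: str     # gamma: added to beliefs when the rule fires

# Bird(x) : Flies(x) / Flies(x), instantiated for a single individual
birds_fly = Default('bird', 'flies', 'flies')
```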

An extension of a default theory is a deductively closed set of formulas representing a maximal coherent set of conclusions that can be drawn by applying defaults. The formal definition is a fixed-point construction: E is an extension if E equals the classical closure of W together with the consequents of all defaults whose prerequisites are in E and whose justifications are consistent with E. The definition additionally requires groundedness: every applied default's prerequisite must be derivable from W and previously applied defaults, which rules out conclusions that merely support themselves.
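Reiter's own characterization is quasi-inductive. Note that the justification test refers to the final set E rather than to the stage Eᵢ, which is what makes the definition a genuine fixed point rather than a simple induction:

```latex
E_0 = W, \qquad
E_{i+1} = \mathrm{Th}(E_i) \,\cup\,
  \{\gamma \mid (\alpha : \beta_1, \ldots, \beta_n \,/\, \gamma) \in D,\;
    \alpha \in E_i,\; \neg\beta_1 \notin E,\, \ldots,\, \neg\beta_n \notin E\}
```

E is then an extension of (W, D) exactly when E = ⋃ᵢ Eᵢ.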

Crucially, a default theory can have multiple extensions or no extension at all. Consider two defaults, α : β / β and α : ¬β / ¬β, with W = {α}. Neither default blocks the other initially, but each blocks the other once applied. The result is two extensions, one containing β and one containing ¬β, representing incompatible but individually coherent ways of extending our beliefs. The other degenerate case also arises: the single default ⊤ : β / ¬β has no extension, since applying it undermines its own justification, while declining to apply it leaves it applicable.
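For small propositional theories, extensions can be enumerated by brute force. The sketch below reuses the hypothetical Default class from above and restricts knowledge to sets of literals rather than arbitrary first-order formulas; it tests each subset of defaults against the fixed-point and groundedness conditions, and it reproduces the two extensions of the example just given.

```python
from itertools import chain, combinations

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def consistent(lits):
    return not any(negate(l) in lits for l in lits)

def applicable(d, beliefs):
    return d.prerequisite in beliefs and consistent(beliefs | {d.justification})

def is_extension(W, D, applied):
    """Does applying exactly the defaults in `applied` yield an extension?"""
    E = set(W) | {d.consequent for d in applied}
    if not consistent(E):
        return False
    # Fixed-point test: every applied default is applicable with respect to E
    # itself, and every unapplied default is blocked by E.
    if not all(applicable(d, E) for d in applied):
        return False
    if any(applicable(d, E) for d in D if d not in applied):
        return False
    # Groundedness: the applied defaults must fire in some order starting from W.
    derived, pending = set(W), set(applied)
    while pending:
        ready = {d for d in pending if d.prerequisite in derived}
        if not ready:
            return False
        derived |= {d.consequent for d in ready}
        pending -= ready
    return True

def extensions(W, D):
    subsets = chain.from_iterable(combinations(D, k) for k in range(len(D) + 1))
    return [set(W) | {d.consequent for d in s}
            for s in subsets if is_extension(W, D, set(s))]

result = extensions({'a'}, [Default('a', 'b', 'b'), Default('a', '~b', '~b')])
# two extensions: {'a', 'b'} and {'a', '~b'}
```

Encoding ⊤ as a dummy atom placed in W lets the same sketch confirm that the theory with the single default ⊤ : β / ¬β has no extension.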

The existence of multiple extensions reflects genuine epistemic indeterminacy. When defaults conflict and neither takes priority, the formalism correctly represents that there are multiple rationally permissible ways to proceed. This is not a bug but a feature: default logic does not fabricate determinacy where none exists. Computational complexity results reflect the genuine difficulty of defeasible reasoning: for propositional default theories, determining whether a formula belongs to some extension is Σ₂ᵖ-complete, and whether it belongs to all extensions is Π₂ᵖ-complete (Gottlob 1992), a level above classical propositional inference in the polynomial hierarchy.

Takeaway

Default logic separates what we know with certainty from rules we apply tentatively, and it honestly represents cases where multiple incompatible conclusions are equally supported.

Preferential Semantics

The KLM framework—developed by Kraus, Lehmann, and Magidor in their seminal 1990 paper—takes a different approach. Rather than specifying rules for belief extension, it characterizes non-monotonic consequence relations through properties they should satisfy. This is an axiomatic methodology: identify the principles that any reasonable defeasible inference relation must obey.

The central innovation is preferential semantics. Possible worlds (or states) are equipped with a preference ordering where more preferred states represent more "normal" situations. A conditional assertion α ~⊦ β holds when β is true in all the most preferred states where α is true. "Birds fly" is validated because in the most normal worlds where something is a bird, it flies—even though abnormal bird-worlds (containing penguins) exist.
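A minimal sketch of this semantics in Python, assuming worlds are truth-value assignments tagged with a normality rank (lower is more normal); the WORLDS model and the entails helper are illustrative assumptions:

```python
# Three worlds: a normal bird, an exceptional (penguin) bird, a non-bird.
WORLDS = [
    ({'bird': True,  'penguin': False, 'flies': True},  0),
    ({'bird': True,  'penguin': True,  'flies': False}, 1),
    ({'bird': False, 'penguin': False, 'flies': False}, 0),
]

def entails(alpha, beta, worlds=WORLDS):
    """alpha ~⊦ beta: beta holds at every minimally ranked world satisfying alpha."""
    alpha_worlds = [(w, r) for w, r in worlds if alpha(w)]
    if not alpha_worlds:
        return True  # vacuously: no world satisfies alpha
    best = min(r for _, r in alpha_worlds)
    return all(beta(w) for w, r in alpha_worlds if r == best)

bird = lambda w: w['bird']
penguin = lambda w: w['penguin']
flies = lambda w: w['flies']

print(entails(bird, flies))                              # True
print(entails(lambda w: bird(w) and penguin(w), flies))  # False: monotonicity fails
print(entails(lambda w: bird(w) and penguin(w),
              lambda w: not flies(w)))                   # True
```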

The KLM framework identifies a hierarchy of consequence relations. Cumulative reasoning satisfies reflexivity, left logical equivalence, right weakening, cut, and cautious monotony. These ensure basic coherence: what you conclude doesn't depend on logically irrelevant reformulations, and conclusions can be used as premises without changing what follows. Preferential reasoning adds the property "Or": if α ~⊦ γ and β ~⊦ γ, then (α ∨ β) ~⊦ γ.
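Written out as inference rules (with |~ for the relation written ~⊦ above), the two load-bearing cumulative properties are:

```latex
\frac{\alpha \mid\!\sim \beta \qquad \alpha \wedge \beta \mid\!\sim \gamma}
     {\alpha \mid\!\sim \gamma}\ (\text{Cut})
\qquad
\frac{\alpha \mid\!\sim \beta \qquad \alpha \mid\!\sim \gamma}
     {\alpha \wedge \beta \mid\!\sim \gamma}\ (\text{Cautious Monotony})
```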

Rational consequence relations satisfy an additional property called Rational Monotony: if α ~⊦ γ and it is not the case that α ~⊦ ¬β, then (α ∧ β) ~⊦ γ. This captures the intuition that learning something you had no defeasible reason to rule out should not defeat a conclusion. Rational consequence relations are characterized by ranked preferential models, where the preference ordering is modular (a total preorder). This connects directly to probability: ranks can be interpreted as orders of magnitude of probability, linking non-monotonic logic to Bayesian reasoning.
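Continuing the preferential-model sketch above (reusing the illustrative entails, bird, penguin, and flies), Rational Monotony can be checked directly on the ranked model:

```python
not_penguin = lambda w: not penguin(w)

# bird ~⊦ flies holds, and bird does not defeasibly entail penguin, so
# Rational Monotony licenses strengthening the antecedent with ¬penguin:
assert entails(bird, flies)
assert not entails(bird, penguin)
assert entails(lambda w: bird(w) and not_penguin(w), flies)

# With beta = penguin the precondition fails (bird ~⊦ ¬penguin holds), so
# the rule is silent, and the conclusion is indeed defeated:
assert entails(bird, not_penguin)
assert not entails(lambda w: bird(w) and penguin(w), flies)
```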

The KLM framework provides a representation theorem: a consequence relation satisfies certain axioms if and only if it is generated by a preferential (or ranked) model. This result transforms the study of non-monotonic reasoning from a proliferation of competing formalisms into a systematic investigation of properties and their model-theoretic counterparts. It reveals that seemingly disparate approaches often characterize the same underlying semantic structures.

Takeaway

Preferential semantics grounds defeasible reasoning in a notion of normalcy: conclusions hold in the most typical cases, and formal properties constrain how conclusions can shift as information changes.

Non-monotonic logics solve a fundamental problem in formal epistemology: capturing the revisability of inference without collapsing into logical anarchy. Default logic, preferential semantics, and related frameworks show that defeasibility can be made precise—that retracting conclusions in light of new information follows principled patterns susceptible to mathematical analysis.

These formalisms matter beyond philosophy. AI systems that reason about the world cannot treat all information as certain and all inferences as permanent. Knowledge representation, diagnostic reasoning, and legal argumentation all require logics that handle exceptions and defaults. The theoretical foundations developed here underwrite practical applications.

What emerges is a refined understanding of rationality itself. Monotonic reasoning is not the gold standard that defeasible reasoning falls short of—it is a special case appropriate to special epistemic circumstances. Most rational inference is defeasible by nature. Formal epistemology's achievement is showing that this defeasibility has structure: structure that can be axiomatized, semantically characterized, and computationally investigated.