Science progresses. Few epistemologists deny this. But what exactly does progress mean? One natural answer: our theories are getting closer to the truth. Newton's mechanics approximates reality better than Aristotelian physics. Einstein's relativity improves on Newton. Each step brings us nearer to how things actually are.
This intuitive notion—that false theories can be more or less truthlike—proves surprisingly difficult to formalize. Karl Popper recognized its importance for scientific realism and attempted the first rigorous definition. His proposal seemed elegant: compare what theories get right against what they get wrong. Unfortunately, logicians Pavel Tichý and David Miller independently proved that Popper's account failed catastrophically: on his definition, no false theory can ever be closer to the truth than another.
The challenge of measuring verisimilitude—distance from truth—remains central to formal epistemology. It connects abstract logical theory to pressing questions about scientific rationality. If we cannot coherently say one false theory is closer to truth than another, scientific realism loses much of its appeal. Fortunately, alternatives to Popper's failed attempt have emerged, employing possible worlds semantics and sophisticated similarity metrics. These approaches offer genuine hope for quantifying what we mean when we claim our theories are approximately true.
Popper's Original Proposal
Popper introduced verisimilitude in Conjectures and Refutations (1963) to address a fundamental question: how can we rationally prefer one false theory over another? His answer involved decomposing a theory's content into two components. The truth-content of a theory T consists of all true propositions entailed by T. The falsity-content comprises all false propositions T entails.
With these notions, Popper's definition appeared straightforward. Theory A is more truthlike than theory B if and only if: (1) A's truth-content includes B's truth-content, (2) A's falsity-content is included in B's falsity-content, and (3) at least one of these inclusions is strict. The definition captured something intuitive—better theories say more true things and fewer false things.
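Popper's comparison can be made concrete in a tiny propositional language. The sketch below is an illustrative model, not Popper's own formalism: a world is a pair of truth values for two atoms, a proposition is the set of worlds where it holds, a theory is identified with its set of models, and the choice of actual world is a stipulation for the example.

```python
from itertools import product, combinations

# Two atoms (p, q); a world is a pair of truth values; a proposition is
# the set of worlds at which it is true. ACTUAL is an illustrative choice.
WORLDS = list(product([True, False], repeat=2))
ACTUAL = (True, True)
PROPS = [frozenset(c) for r in range(len(WORLDS) + 1)
         for c in combinations(WORLDS, r)]

def consequences(models):
    """A theory entails phi iff every model of the theory is a phi-world."""
    return {phi for phi in PROPS if models <= phi}

def truth_content(models):
    """All true propositions the theory entails."""
    return {phi for phi in consequences(models) if ACTUAL in phi}

def falsity_content(models):
    """All false propositions the theory entails."""
    return {phi for phi in consequences(models) if ACTUAL not in phi}

def popper_closer(a, b):
    """Popper's comparison: A beats B iff A's truth-content includes B's,
    A's falsity-content is included in B's, and one inclusion is strict."""
    ta, tb = truth_content(a), truth_content(b)
    fa, fb = falsity_content(a), falsity_content(b)
    return tb <= ta and fa <= fb and (tb < ta or fb < fa)
```

As a sanity check, the complete truth (the theory whose only model is the actual world) comes out strictly more truthlike than the tautology, exactly as the definition intends.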
The proposal seemed to validate scientific realism beautifully. We could now explain why replacing phlogiston theory with oxygen theory constituted progress. The latter entails more truths about combustion and fewer falsehoods. Popper believed he had solved a crucial problem in the philosophy of science.
Then came 1974. Pavel Tichý and David Miller, working independently, published proofs demonstrating Popper's definition was internally inconsistent. Their arguments showed that no false theory can have greater verisimilitude than any other false theory under Popper's criteria. The result was devastating. If one false theory exceeds another in truth-content, it necessarily exceeds it in falsity-content as well.
The refutations exploited logical closure properties. Suppose A and B are both false and A's truth-content strictly exceeds B's. Pick a truth t that A entails but B does not, and any falsehood f that A entails. Then the conjunction (f ∧ t) is a falsehood A entails, and B cannot entail it, since anything entailing (f ∧ t) entails t. So A's falsity-content exceeds B's as well: falsity-content grows in lockstep with truth-content. Popper's elegant proposal collapsed entirely. The failure wasn't merely technical—it showed the fundamental approach of counting true versus false consequences was misconceived.
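In a finite toy language the Tichý–Miller collapse can be checked exhaustively. The self-contained sketch below (same illustrative representation as before: worlds as truth-value pairs, theories as model sets) enumerates every pair of false theories and confirms that Popper's strict comparison is never satisfied.

```python
from itertools import product, combinations

# Brute-force check of the Tichy-Miller result in a two-atom language.
WORLDS = list(product([True, False], repeat=2))
ACTUAL = (True, True)  # illustrative stipulation
PROPS = [frozenset(c) for r in range(len(WORLDS) + 1)
         for c in combinations(WORLDS, r)]

def contents(models):
    """Split a theory's consequences into truth-content and falsity-content."""
    cons = [phi for phi in PROPS if models <= phi]
    return ({phi for phi in cons if ACTUAL in phi},
            {phi for phi in cons if ACTUAL not in phi})

# Every consistent false theory: a nonempty model set excluding ACTUAL.
false_theories = [frozenset(c)
                  for r in range(1, len(WORLDS))
                  for c in combinations([w for w in WORLDS if w != ACTUAL], r)]

violations = 0
for a in false_theories:
    for b in false_theories:
        ta, fa = contents(a)
        tb, fb = contents(b)
        # Popper's strict comparison: provably never holds between falsehoods.
        if tb <= ta and fa <= fb and (tb < ta or fb < fa):
            violations += 1
print(violations)  # 0
```

Seven false theories, forty-nine ordered pairs, zero comparable pairs: the definition is silent exactly where it was supposed to do its work.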
Takeaway: Verisimilitude cannot be captured by simply tallying truths and falsehoods a theory entails—logical closure creates dependencies that undermine any such accounting.
Possible Worlds Approaches
After Popper's failure, epistemologists sought alternative foundations. The most successful employ possible worlds semantics. Rather than counting consequences, these approaches measure how close a theory's models are to the actual world. A theory is truthlike to the degree that the possible worlds where it holds resemble actuality.
The framework requires a similarity metric on possible worlds. Tichý and Oddie developed accounts using distances in logical space. Imagine possible worlds as points in a multidimensional space where each dimension represents a proposition. The actual world occupies one point. Theories correspond to regions—the worlds where they hold. Truthlikeness becomes geometric: how close is a theory's region to the actual world?
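A minimal version of this geometric picture can be coded directly. The sketch below assumes a three-atom language, Hamming distance between worlds, and an Oddie-style average-distance measure; all of these are illustrative modeling choices, and other metrics and aggregation rules are equally available.

```python
from itertools import product

# Three atoms span an 8-point logical space; distance between two
# worlds is the number of atoms on which they disagree.
ATOMS = 3
WORLDS = list(product([1, 0], repeat=ATOMS))
ACTUAL = (1, 1, 1)  # stipulate: all three atoms are true

def hamming(u, v):
    """Number of atoms on which worlds u and v disagree."""
    return sum(a != b for a, b in zip(u, v))

def truthlikeness(models):
    """Oddie-style measure: 1 minus the normalized average distance
    from the theory's worlds to the actual world."""
    avg = sum(hamming(w, ACTUAL) for w in models) / len(models)
    return 1 - avg / ATOMS

nearly_right = {(1, 1, 0)}   # false, but right about two of three atoms
tautology = set(WORLDS)      # true, but says nothing
dead_wrong = {(0, 0, 0)}     # wrong about everything
```

On this measure the false-but-specific theory scores 2/3, the tautology 1/2, and the maximally wrong theory 0—capturing the intuition that an informative near-miss beats both vacuity and total error.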
Ilkka Niiniluoto refined these ideas into a sophisticated probabilistic framework. He introduced the notion of expected verisimilitude. Given our evidence, we have probability distributions over which world is actual and over which worlds satisfy a theory. Truthlikeness becomes an expected distance calculation, integrable with Bayesian epistemology. This permits dynamic verisimilitude assessments that update with new evidence.
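The core move of expected verisimilitude can be sketched in a few lines. The code below is a simplified stand-in for Niiniluoto's framework, assuming a three-atom language, a min-distance truthlikeness measure, and a toy posterior over candidate actual worlds; the particular numbers are invented for illustration.

```python
from itertools import product

# Since we do not know which world is actual, estimate a theory's
# truthlikeness by averaging over candidates, weighted by posterior
# probability given the evidence.
ATOMS = 3
WORLDS = list(product([1, 0], repeat=ATOMS))

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def truthlikeness_at(models, world):
    """Min-distance truthlikeness of the theory if `world` were actual."""
    return 1 - min(hamming(w, world) for w in models) / ATOMS

def expected_verisimilitude(models, posterior):
    """Posterior-weighted average truthlikeness (Niiniluoto-style estimate)."""
    return sum(p * truthlikeness_at(models, w) for w, p in posterior.items())

# Toy posterior: evidence strongly favors (1, 1, 1) as the actual world.
posterior = {(1, 1, 1): 0.8, (1, 1, 0): 0.2}
theory = {(1, 1, 0)}  # asserts p, q, and not-r
```

Here the theory's estimated verisimilitude is 0.8 · (2/3) + 0.2 · 1 ≈ 0.73, and the estimate shifts automatically as the posterior shifts with new evidence.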
Different similarity metrics yield different verisimilitude orderings. This might seem problematic—which metric is correct? But the pluralism proves valuable. Different scientific contexts emphasize different respects of similarity. In some domains, quantitative accuracy matters most. In others, structural or qualitative features dominate. The framework accommodates context-sensitivity rather than forcing artificial uniformity.
Technical challenges remain. The approach requires complete descriptions of possible worlds, raising questions about infinite domains and continuous quantities. Aggregating distances across multiple worlds satisfying a theory involves choices about averaging. Yet these are productive technical problems admitting various solutions, unlike Popper's fundamental incoherence. The possible worlds framework provides a genuine foundation for comparative truthlikeness judgments.
Takeaway: Measuring verisimilitude as similarity between possible worlds avoids Popper's logical catastrophe by shifting from consequence-counting to geometric distance in logical space.
Approximate Truth in Science
Formal verisimilitude bears directly on scientific realism—the view that successful scientific theories are approximately true descriptions of reality. Realists claim science progresses toward truth. Anti-realists counter that empirically successful theories have often proven fundamentally false. The pessimistic meta-induction suggests current theories will likely be superseded too.
Verisimilitude theory offers realists a sophisticated response. Superseded theories needn't be equally false. Newtonian mechanics, though strictly false, remains highly truthlike for low-velocity, weak-gravity regimes. Its falsity differs in kind from Aristotelian physics. The realist can maintain that science approaches truth asymptotically, with successive theories increasing in verisimilitude even if none achieves complete accuracy.
Niiniluoto's expected verisimilitude connects to convergence theorems in Bayesian epistemology. Under reasonable conditions, expected verisimilitude increases as evidence accumulates. We have formal grounds for optimism: inquiry conducted via Bayesian updating tends toward more truthlike theories. This provides mathematical support for scientific realism's intuitive appeal.
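A deterministic miniature makes the convergence claim vivid. The sketch below is not one of the convergence theorems themselves, just an illustration under simple assumptions: a two-atom language, a uniform prior over worlds, and evidence that conclusively rules worlds out; conditioning on each piece of evidence raises the expected verisimilitude of the true theory.

```python
from itertools import product

WORLDS = list(product([1, 0], repeat=2))  # atoms p, q

def normalize(dist):
    z = sum(dist.values())
    return {w: pr / z for w, pr in dist.items()}

def update(prior, evidence):
    """Bayesian conditioning: drop worlds the evidence rules out, renormalize."""
    return normalize({w: pr for w, pr in prior.items() if evidence(w)})

def expected_ver(theory, posterior):
    """Expected truthlikeness of a theory (a set of worlds) under the
    posterior, using normalized min Hamming distance."""
    def tl(world):
        d = min(sum(a != b for a, b in zip(m, world)) for m in theory)
        return 1 - d / 2  # two atoms
    return sum(pr * tl(w) for w, pr in posterior.items())

truth = {(1, 1)}  # the complete truth: p and q both hold
prior = normalize({w: 1.0 for w in WORLDS})
after_p = update(prior, lambda w: w[0] == 1)     # learn that p is true
after_pq = update(after_p, lambda w: w[1] == 1)  # then learn that q is true
# Expected verisimilitude climbs: 0.5, then 0.75, then 1.0.
```

Real convergence results replace conclusive evidence with likelihoods and hold only under further conditions, but the direction of travel is the same: evidence concentrates the posterior near actuality, and estimated truthlikeness rises with it.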
However, complications arise. Verisimilitude requires comparing theories to the actual world—but we lack independent access to complete truth. Critics argue verisimilitude assessments presuppose the very knowledge we seek. Responses emphasize that approximate truth is fallibly knowable. We can have evidence-based estimates of verisimilitude without certainty, just as we have evidence-based beliefs about ordinary matters.
The formalization ultimately illuminates what realism claims. It's not that any current theory is definitely approximately true. Rather, the concept of approximate truth is coherent, measurable in principle, and central to understanding scientific progress. Whether our theories actually approach truth remains an empirical question. But the conceptual framework for making sense of that question stands on firm logical ground.
Takeaway: Formal verisimilitude rescues scientific realism from pessimistic arguments by showing how false theories can be more or less truthlike—progress doesn't require perfection.
Measuring distance from truth proves harder than Popper anticipated. His elegant proposal—comparing truth-content and falsity-content—failed logically. Tichý and Miller's refutations weren't mere technical difficulties; they revealed fundamental flaws in consequence-based approaches to verisimilitude.
Possible worlds frameworks offer genuine solutions. By reconceptualizing truthlikeness as proximity in logical space rather than tallying consequences, formal epistemologists have built coherent accounts of comparative verisimilitude. Niiniluoto's probabilistic refinements integrate these insights with Bayesian reasoning.
The philosophical payoff extends beyond technical achievement. Scientific realism rests on claims about theories approaching truth. Without coherent verisimilitude concepts, such claims dissolve into metaphor. The formal accounts examined here provide the precision realism needs. False theories differ in their approximation to truth. Science can progress. These claims now have rigorous content.