In 1915, Einstein's general theory of relativity predicted that starlight passing near the Sun would bend by a precise amount—1.75 arcseconds. Four years later, Arthur Eddington's eclipse expedition confirmed it. The theory described invisible curvatures in spacetime that no one had ever observed, yet it got the numbers right. How do you explain that kind of success if the theory isn't, in some meaningful sense, true?
This question sits at the heart of one of philosophy of science's most persistent debates. Hilary Putnam crystallized it in 1975 with what has become known as the no-miracles argument: the empirical success of mature scientific theories would be miraculous unless those theories at least approximately describe the world as it actually is. If our theoretical entities—electrons, spacetime curvature, natural selection—are mere fictions, then their repeated predictive triumphs become an extraordinary coincidence demanding explanation.
Yet the argument is far from settled. The history of science is littered with theories that were spectacularly successful and spectacularly wrong—caloric fluid, the luminiferous ether, phlogiston. If past success didn't guarantee truth then, why should we trust it now? The debate over scientific realism has grown more sophisticated in response, producing refined positions that try to honor both the genuine explanatory power of the miracle argument and the humbling lessons of scientific history. What emerges is not a simple answer but a deeper understanding of what it means for a theory to be approximately true.
Success Requires Truth
The no-miracles argument begins with an observation so familiar it can seem trivial: science works. Pharmaceuticals cure diseases. Satellites maintain orbit. Semiconductor chips process billions of operations per second. These are not lucky guesses. They are the outputs of theories that describe entities and processes far beyond direct observation—quantum tunneling, gravitational fields, molecular binding affinities. The argument insists that this remarkable track record demands explanation.
Putnam's formulation is deceptively simple. The best explanation for the empirical success of science is that our mature theories are approximately true—that the entities they posit exist, more or less as described, and the relationships they describe approximately hold. The alternative, that theories succeed despite being fundamentally wrong about the furniture of the world, would make scientific progress a cosmic coincidence. Realists argue this is explanatorily bankrupt, like insisting that a map reliably guides you to your destination even though it bears no relation to the actual terrain.
The argument gains force when you consider novel predictive success—cases where a theory predicts phenomena it was never designed to accommodate. When Maxwell's electromagnetic equations predicted radio waves two decades before Hertz detected them, or when the Standard Model predicted the existence of the Higgs boson decades before the LHC detected it, the realist sees something more than instrumental convenience. These are theories reaching beyond their original data and finding the world waiting exactly where they said it would be.
Importantly, the argument is an inference to the best explanation—the same reasoning scientists themselves use constantly. If the realist's inference pattern is illegitimate here, it becomes difficult to see why it should be legitimate anywhere in science. Anti-realists must either accept this reasoning selectively or reject a mode of inference that permeates scientific practice itself. This reflexive quality gives the miracle argument a particular bite: denying realism seems to undermine the very explanatory reasoning that makes science powerful.
Critics have pushed back by arguing that the no-miracles argument commits a base-rate fallacy. We observe only the successful theories—the ones that survived—not the graveyard of failed theoretical posits, and inferring truth from success tells us little unless we also know how common truth is among all the theories ever proposed. The realist must show not just that successful theories tend to be true, but that truth is the best available explanation for the specific character of that success—its precision, its novelty, its fertility across domains.
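To see why the base rate matters, consider a purely illustrative Bayesian sketch; every number here is invented for the example, not drawn from any survey of theories. Suppose a true theory yields striking empirical success 95 percent of the time, a false theory manages it 10 percent of the time, and only 1 percent of candidate theories are true. Then

\[
P(T \mid S) = \frac{P(S \mid T)\,P(T)}{P(S \mid T)\,P(T) + P(S \mid \neg T)\,P(\neg T)}
= \frac{0.95 \times 0.01}{0.95 \times 0.01 + 0.10 \times 0.99} \approx 0.09,
\]

where T is "the theory is true" and S is "the theory is strikingly successful." Even a strong link between truth and success leaves the posterior probability of truth below ten percent when true theories are rare among the candidates. The realist's burden is to argue either that the base rate for mature, well-tested theories is far higher, or that genuinely novel predictive success is much harder for a false theory to achieve than the invented 10 percent figure allows.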
Takeaway: When a theory consistently predicts phenomena it was never designed to explain, treating its core claims as approximately true is not naive faith—it is the most parsimonious explanation for a pattern that would otherwise be inexplicable.
Historical Counterexamples
If the miracle argument were airtight, the debate would be over. It isn't, and the most powerful objection comes from the history of science itself. Laudan's pessimistic meta-induction assembles a damning catalogue: theories that were empirically successful by any reasonable standard, yet whose central theoretical entities we now regard as nonexistent. Caloric fluid explained heat transfer. The luminiferous ether explained light propagation. Phlogiston explained combustion. Each generated accurate predictions. Each turned out to be fundamentally mistaken about what the world contains.
Consider the ether. Nineteenth-century physicists used it to derive Fresnel's equations for the reflection and refraction of light with extraordinary precision. Those equations still work—they appear in modern optics textbooks. But the entity that supposedly underwrote them, a rigid yet frictionless medium permeating all of space, does not exist. The predictive success was real; the ontology was fictional. If success guaranteed approximate truth, the ether should have been real.
The pessimistic meta-induction generalizes from these cases: since many past successful theories were false, we have inductive grounds for believing that our current successful theories are probably false too. This is a direct assault on the realist's core inference. It suggests that the relationship between empirical success and theoretical truth is far more tenuous than the miracle argument assumes. Success, it seems, can arise from getting the structure partially right while being deeply wrong about what populates that structure.
Anti-realists like Bas van Fraassen press this further with constructive empiricism. Science aims not at truth but at empirical adequacy—theories need only "save the phenomena," accurately describing observable regularities without their unobservable posits being true. On this view, the success of science is explained by a Darwinian selection process: theories compete, and the empirically adequate ones survive. No appeal to truth is needed, just as we don't need to invoke divine design to explain why organisms fit their environments.
The force of these counterexamples is not that they refute realism outright but that they raise the burden of proof. A naive realism that simply equates success with truth cannot survive Laudan's list. Any defensible realism must explain why certain kinds of success are reliable indicators of truth while others are not. This challenge has driven the most interesting developments in the realism debate over the past four decades—a search for principled criteria that separate the theoretical wheat from the chaff.
Takeaway: The history of science teaches that empirical success is no guarantee of theoretical truth—a reminder that our confidence in any theory should be calibrated not to its predictions alone, but to the specific structural features that generate those predictions.
Refined Realism
The most sophisticated responses to the pessimistic meta-induction don't retreat from realism—they refine it. The key move is selectivity: realists need not commit to the truth of entire theories. Instead, they can identify which parts of successful theories are doing the genuine explanatory work and restrict their realist commitments accordingly. This approach, broadly called selective realism, comes in several varieties, each offering a different criterion for where to place ontological trust.
John Worrall's structural realism argues that what is preserved across theory change is not the nature of unobservable entities but the mathematical structure of our theories. Fresnel's equations survived the death of the ether because the structural relationships they encode—between angles, wavelengths, and refractive indices—tracked something real about the world, even though the entity supposedly instantiating that structure did not exist. On this view, science progressively captures the world's relational architecture, even when it misdescribes the relata.
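To make Worrall's point concrete, here are the Fresnel amplitude reflection coefficients in one common modern form; sign conventions differ across optics texts, so take this as an illustrative statement of the structure rather than a canonical one. With \(\theta_i\) the angle of incidence and \(\theta_t\) the angle of refraction,

\[
r_{\perp} = -\frac{\sin(\theta_i - \theta_t)}{\sin(\theta_i + \theta_t)},
\qquad
r_{\parallel} = \frac{\tan(\theta_i - \theta_t)}{\tan(\theta_i + \theta_t)}.
\]

Fresnel derived relations of this form from a mechanical model of vibrations in an elastic ether; Maxwell's electromagnetism later rederived the same relations from oscillating electric and magnetic fields. The relational structure linking angles and amplitudes survived intact while the ontology beneath it was replaced, which is exactly the continuity the structural realist points to.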
Philip Kitcher and Stathis Psillos offer a different refinement: a distinction between working posits and idle posits. The theoretical components genuinely responsible for a theory's novel predictive success—the ones that feature essentially in the derivation of those predictions—are the ones we should believe in. Idle theoretical baggage that plays no role in generating success earns no realist commitment. When Psillos reexamines Laudan's historical cases, he argues that the posits actually driving predictive success were often retained in successor theories, while the ones abandoned were doing no real work.
This selective strategy transforms the debate from a binary question—are our theories true or not?—into a more nuanced investigation of which theoretical commitments are well-supported and why. It also aligns with scientific practice itself. Working scientists routinely distinguish between the parts of a model they take seriously and the parts they regard as convenient idealizations. The selective realist gives philosophical articulation to a discrimination scientists already make intuitively.
The debate continues to evolve. Recent work on perspectival realism by Michela Massimi argues that scientific knowledge is always situated within particular theoretical perspectives, yet this perspectival character is compatible with genuine claims about mind-independent reality. Meanwhile, challenges from quantum mechanics—where empirical success is extraordinary but ontological interpretation remains deeply contested—test every version of realism. What the refined debate reveals is that the miracle argument's core insight survives: success demands explanation. But the explanation itself must be as sophisticated as the science it seeks to vindicate.
Takeaway: The most defensible realism is not a blanket endorsement of everything a theory says, but a disciplined commitment to the specific structural and theoretical components that are genuinely responsible for its predictive power—a realism earned, not assumed.
The miracle argument endures because it captures something genuinely puzzling about science. Theories reach into domains we cannot see, posit entities we cannot touch, and return with predictions of startling precision. Dismissing this as coincidence feels intellectually irresponsible. Yet the history of abandoned-but-successful theories means any honest realism must be chastened realism—aware of its own fallibility.
What emerges from decades of debate is not a triumphant proof that our theories are true, but something more valuable: a framework for calibrating confidence. We learn to ask which parts of a theory are load-bearing, which structural features persist through revolutions, and where our ontological commitments are genuinely earned by predictive labor rather than inherited by theoretical inertia.
Science's success is not a miracle. But neither is it a blank check for belief. The refined realist position teaches us to hold our best theories with a particular combination of conviction and humility—trusting the architecture while remaining open to discovering that the furniture needs rearranging.