In 1915, Einstein's general relativity made a peculiar prediction: light from distant stars should bend as it passes near the Sun. Four years later, Arthur Eddington's expedition measured exactly this effect during a solar eclipse. The scientific world was transformed overnight—not because Einstein explained something we already knew, but because he predicted something nobody had observed.
This raises a puzzle that cuts to the heart of how science works. Why should we trust a theory more when it predicts new facts rather than simply explaining old ones? After all, the evidence is the same—starlight bends near massive objects. Yet scientists rightly treat successful prediction as more impressive than successful explanation. Understanding this asymmetry reveals something profound about what makes scientific knowledge reliable.
Temporal Priority: When Knowledge Comes First
Imagine two detectives solving a crime. The first examines all the evidence, then constructs a theory that fits perfectly. The second, knowing only a few details, predicts where the murder weapon will be found—and is right. Which detective do you trust more? Most of us sense that the second detective demonstrates something deeper: genuine understanding rather than clever pattern-matching.
This intuition drives what philosophers call the prediction-accommodation asymmetry. When a theory predicts a phenomenon before it's observed, we gain confidence that the theory captures something real about the world. When a theory merely accommodates known facts—explaining them after they're discovered—we face a nagging worry. Perhaps the theory was simply crafted to fit the data, like a custom suit tailored to measurements already taken.
The temporal order matters because it constrains what theorists can do. A scientist building a theory in 1910 cannot peek at 1919's eclipse data. Any successful prediction must come from the theory's own internal logic, not from reverse-engineering the answer. This constraint provides a natural test of whether a theory contains genuine insight or merely summarizes what we already know.
Takeaway: A theory that predicts before observing demonstrates understanding; a theory that only explains afterward might just be sophisticated curve-fitting.
Use-Novelty: The Facts That Weren't Used
But temporal order isn't the whole story. Consider a more subtle question: what if a fact was known when a theory was built, but the theorist didn't actually use it? Philosopher John Worrall argues that what matters isn't when evidence was discovered, but whether it was used in constructing the theory.
This concept—called use-novelty—explains why some accommodations carry weight while others don't. When Darwin developed natural selection, he knew about the geographic distribution of species. But his theory wasn't reverse-engineered from this pattern; it emerged from thinking about variation, inheritance, and competition. The distribution facts provided confirmation precisely because they weren't used to build the theory. They were independent tests.
Use-novelty also explains why scientists distrust certain kinds of theorizing. If you have twenty free parameters to adjust, you can fit almost any data. The resulting "explanation" tells us little because the theory could have accommodated almost any outcome. The facts that truly confirm a theory are those it had no opportunity to absorb during construction—facts that genuinely put the theory at risk.
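The free-parameter worry can be made concrete with a toy sketch in pure Python (the data here are made up for illustration): a maximally flexible model, with as many parameters as data points, "accommodates" every observation exactly, yet fails badly on a point it never saw, while a constrained two-parameter model generalizes well.

```python
def lagrange_predict(xs, ys, x):
    """Evaluate at x the unique degree-(n-1) polynomial through the n points.

    With n parameters for n points, this model can fit ANY data perfectly.
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def line_fit(xs, ys):
    """Ordinary least-squares line: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical noisy observations of an underlying line y = 2x + 1.
xs = [0, 1, 2, 3, 4, 5]
ys = [1.1, 2.9, 5.2, 6.8, 9.1, 10.9]

# The flexible model accommodates every known point exactly...
assert all(abs(lagrange_predict(xs, ys, x) - y) < 1e-9
           for x, y in zip(xs, ys))

# ...but a genuine prediction at an unseen x = 7 puts each model at risk.
true_value = 2 * 7 + 1  # what the underlying process actually yields: 15
flexible = lagrange_predict(xs, ys, 7)       # wildly wrong (off by ~50)
slope, intercept = line_fit(xs, ys)
constrained = slope * 7 + intercept          # close to 15
```

The flexible model's perfect fit told us nothing, because it could have absorbed any data; only the constrained model's risky extrapolation counts as a test.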
Takeaway: Evidence confirms a theory most strongly when the theory had no chance to be shaped by that evidence—regardless of when the evidence was discovered.
Risk Taking: The Virtue of Being Wrong
Here we reach perhaps the deepest insight. Predictions matter because they involve risk. When Einstein predicted light bending, he staked his theory's credibility on a specific, measurable outcome. Had Eddington found no bending, general relativity would have faced serious trouble. This vulnerability is precisely what makes success meaningful.
Philosopher Karl Popper built an entire philosophy of science around this idea. Theories that make bold, risky predictions—predictions that could easily fail—demonstrate their mettle when they survive. Theories that only explain what's already known never face genuine danger. They're like fortune-tellers who only make predictions after the fact: technically accurate, but deeply unimpressive.
This connects to a broader truth about knowledge. We learn most when we stick our necks out. A theory that risks being wrong and survives has proven something about its connection to reality. A theory that merely absorbs whatever data arrives has proven only its flexibility. The courage to be specific, to say "this and not that," separates genuine understanding from sophisticated description.
Takeaway: The theories most worth trusting are those that risked being wrong—and survived. Flexibility in explanation is a vice, not a virtue.
The asymmetry between prediction and accommodation reveals something essential about scientific reasoning. We trust theories that predict novel facts because they demonstrate genuine understanding—they couldn't have cheated by peeking at the answer. They took risks and survived.
This principle applies beyond science. Whenever you evaluate an explanation, ask: did this idea genuinely anticipate what happened, or was it crafted afterward to fit? The answer tells you whether you're witnessing understanding or rationalization.