When a scientific theory clashes with new evidence, why don't scientists simply abandon it? From the outside, theoretical stubbornness looks like intellectual failure—a refusal to follow the data wherever it leads. But the relationship between evidence and theory is far more intricate than popular accounts of scientific method suggest.
The standard narrative says science advances through bold conjecture and ruthless refutation. A theory makes predictions, experiments test them, and failed predictions kill the theory. This tidy picture, inherited from a simplified reading of Karl Popper, obscures something crucial: evidence almost never speaks directly against a theory in isolation. The decision to defend or abandon a framework involves layers of judgment that no simple rule can capture.
Understanding why scientists sometimes cling to theories under pressure isn't just an exercise in philosophy of science. It reveals something fundamental about how communities of inquiry navigate uncertainty—and why the line between rational persistence and irrational dogmatism is far thinner than we might wish.
Why Evidence Alone Can't Kill a Theory
When an experiment yields results that contradict a theory's predictions, it might seem obvious what should happen next: the theory is wrong, so discard it. But Pierre Duhem and Willard Van Orman Quine identified a fundamental complication that undermines this straightforward logic. No scientific theory faces the tribunal of experience alone—it always arrives bundled with auxiliary assumptions, background theories, and instrumental calibrations.
Consider a concrete case. When nineteenth-century astronomers noticed that Uranus wasn't following the orbit predicted by Newtonian mechanics, they didn't conclude Newton was wrong. Instead, they questioned an auxiliary assumption—that all relevant gravitational bodies had been accounted for. This led to the prediction and subsequent discovery of Neptune. The anomaly, far from refuting Newton, became a triumph for the theory.
This is the Duhem-Quine problem in action. When prediction meets contradicting evidence, the logical structure permits multiple responses. Scientists can revise the core theory, but they can equally revise any auxiliary hypothesis supporting the prediction. The evidence itself doesn't dictate which component should absorb the blame. This underdetermination means that holding onto a theory in the face of apparent refutation isn't necessarily irrational—it can be a perfectly logical response.
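The logical point can be made precise with a standard schematic reconstruction (the symbols T, A, and P are illustrative, not Duhem's or Quine's own notation). Let T be the core theory, A the conjunction of auxiliary assumptions, and P the prediction:

```latex
% The prediction follows from theory plus auxiliaries, not from theory alone:
(T \wedge A) \rightarrow P
% The experiment delivers the negation of the prediction:
\neg P
% Modus tollens refutes only the conjunction:
\therefore \; \neg(T \wedge A) \;\equiv\; \neg T \vee \neg A
```

The conclusion is a disjunction: either the theory or some auxiliary assumption is false. Logic alone cannot say which disjunct to blame, and that gap is exactly where scientific judgment enters.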
The implications extend well beyond individual episodes. If evidence can never deliver a decisive verdict against a theory on its own, then theory evaluation becomes an inherently social process. Communities of scientists must collectively negotiate which assumptions to protect, which to revise, and when the accumulation of anomalies warrants more fundamental rethinking. The Duhem-Quine problem doesn't make science arbitrary, but it reveals that scientific judgment involves far more than reading nature's verdicts off experimental results.
Takeaway: Evidence never speaks with a single voice. When theory meets contradicting data, the logical structure always permits multiple responses—which means deciding what the evidence means is inescapably a communal judgment, not a mechanical procedure.
Why Premature Abandonment Is Its Own Kind of Error
There's a widespread assumption that intellectual virtue lies in being maximally responsive to contrary evidence—that the best scientist is the one quickest to change their mind. But this overlooks a genuine epistemic danger running in the opposite direction: abandoning a productive framework too soon and losing the accumulated understanding it generated.
Mature scientific theories represent enormous investments of cognitive labor. They come equipped with problem-solving techniques, refined mathematical formalisms, trained practitioners, and a track record of past successes. Thomas Kuhn called the work carried out within this infrastructure normal science—the detailed puzzle-solving that produces the bulk of scientific knowledge. When anomalies appear, they don't automatically erase these achievements. A theory that has explained a wide range of phenomena doesn't lose that explanatory power because of one or even several stubborn observations.
History bears this out repeatedly. The kinetic theory of gases faced serious anomalies regarding specific heat ratios for decades before quantum mechanics eventually resolved them. Had physicists abandoned the kinetic theory at the first sign of trouble, an entire framework of productive research would have been lost prematurely. The anomalies were real, but patience proved more fruitful than quick rejection would have been.
This suggests that some degree of theoretical conservatism functions as an epistemic virtue within scientific communities. Not every scientist should respond identically to anomalies—a division of epistemic labor benefits the community as a whole. Some researchers rightly defend existing frameworks while others pursue alternatives. The community-level rationality of science depends on this diversity of commitments, even when some look, in retrospect, like stubbornness. What matters is that the community collectively explores the full space of possibilities.
Takeaway: Quick responsiveness to contrary evidence isn't always a virtue. Sometimes the most rational thing a scientific community can do is tolerate anomalies while the deeper strengths of a productive theory continue generating knowledge.
When Defense Becomes Dogma
If some resistance to anomalies is rational, how do we tell when legitimate defense crosses into unproductive dogmatism? The philosopher Imre Lakatos offered a framework that remains remarkably useful here. He distinguished between progressive and degenerating research programs based on a deceptively simple criterion: whether theoretical modifications generate genuinely new predictions or merely paper over existing problems.
A progressive research program responds to challenges with adjustments that not only accommodate problematic evidence but also predict novel phenomena not previously observed. The modification adds content—it tells us something new about the world. A degenerating program, by contrast, responds to anomalies with ad hoc patches that do nothing beyond shielding the theory from refutation. Each fix addresses a specific problem without generating any additional testable claims.
The distinction is easier to state than to apply in real time. Whether a modification counts as genuinely novel or merely protective often remains unclear for years or decades. Lakatos himself acknowledged that research programs can appear to degenerate and then revive unexpectedly. There is no instant litmus test for when persistence becomes pathological—which is precisely why institutional structures and community norms matter so much.
This is where social epistemology becomes indispensable. Recognizing degeneration cannot be done mechanically by isolated individuals. It requires open critical dialogue within scientific communities—what Helen Longino calls transformative criticism. Communities need functioning mechanisms for scrutiny: peer review, genuine intellectual diversity, and structures allowing dissenting voices to be heard without career penalty. The rationality of science doesn't reside in any individual's judgment about when to abandon a theory. It resides in the collective processes that enable communities to recognize when defense has become dogma.
Takeaway: The difference between productive persistence and unproductive dogmatism isn't visible from inside a single mind—it becomes apparent only through the sustained critical dialogue of a well-functioning scientific community.
The question of when to hold onto a theory and when to let go has no algorithmic answer. The Duhem-Quine problem ensures that evidence alone cannot decide, the costs of premature abandonment show that quick rejection is its own kind of error, and the progressive-degenerating distinction offers guidance without guarantees.
What emerges is that scientific rationality is fundamentally a community achievement. No individual scientist needs to get the timing perfectly right. What matters is that scientific institutions sustain the conditions for genuine critical exchange—allowing both defenders and challengers to do their work.
The lesson isn't that science is arbitrary or merely social. It's that getting knowledge right requires getting our institutions right. Epistemic virtue is, in the end, an institutional project.