In 1913, the physicist Niels Bohr built a model of the hydrogen atom that treated the electron as if it occupied a single energy level at a time — a simplification everyone knew was false. The electron does far more complicated things. Yet this wrong model predicted hydrogen's spectral lines with stunning accuracy. It worked precisely because it left things out.
This is not an embarrassing footnote in the history of science. It is the central method of scientific modeling. Every model, from climate simulations to economic forecasts to diagrams of cellular metabolism, achieves its power by deliberately misrepresenting the world. The interesting question is not whether models are wrong — they always are — but how their specific wrongness makes them useful.
Understanding why scientists embrace productive falsehood reveals something profound about the social negotiations that determine which simplifications count as acceptable, which distortions are deemed illuminating, and how communities of researchers collectively decide what reality can afford to lose.
The Art of Deliberate Simplification
Every student of physics encounters the frictionless plane — a surface that does not exist and never will. Every student of economics meets the perfectly rational agent — a being who calculates every decision with infinite speed and flawless logic. These are not failures of imagination. They are idealization strategies, deliberate acts of simplification that strip a phenomenon down to the variables a researcher wants to isolate.
The philosopher of science Ernan McMullin distinguished two kinds of what he called Galilean idealization: causal idealizations, which simplify by removing known complications, and construct idealizations, which introduce entirely fictional properties to make the mathematics tractable. A causal idealization might ignore air resistance when modeling a falling ball. A construct idealization might treat a gas as composed of infinitely small, perfectly elastic particles that do not exist in nature. Both are productive lies, but they lie in different directions.
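To see what a causal idealization actually deletes, here is a minimal worked sketch (the drag coefficient $c$ is generic, not drawn from any particular source). The full equation of motion for a ball of mass $m$ falling through air carries a drag term; the idealization simply drops it:

$$m\ddot{y} = -mg - c\,\dot{y}\,|\dot{y}| \quad\longrightarrow\quad m\ddot{y} = -mg$$

The idealized equation is false of every real ball, but it has a closed-form solution, $y(t) = y_0 - \tfrac{1}{2}gt^2$ for a ball released from rest, and it isolates exactly the factor the modeler wants to study: gravity.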
What makes idealization a social practice rather than merely a logical one is that communities of scientists must agree on which simplifications are acceptable. When economists model markets without accounting for psychological biases, behavioral economists push back — not because the model is wrong (all models are), but because the specific way it is wrong obscures phenomena the discipline should care about. The boundary between useful distortion and harmful distortion is negotiated, not given.
Thomas Kuhn recognized this in his account of normal science: each paradigm carries implicit agreements about which idealizations are standard. Frictionless planes are uncontroversial in Newtonian mechanics. But when a new paradigm emerges, the old idealizations sometimes become the very distortions the revolution seeks to correct. What counts as a productive simplification, it turns out, depends on what your community is trying to see.
Takeaway: Every model gains its explanatory power by leaving things out — and the choice of what to leave out is as much a social negotiation as a logical decision.
How False Models Produce Real Knowledge
Here is the philosophical puzzle at the heart of scientific modeling: if a model is known to be false, how can it teach us anything true about the world? This is not a rhetorical question. Philosophers of science have debated it intensely, and the answers reveal deep tensions in how we understand the relationship between representation and reality.
One influential response comes from the inferential account of models, associated with Mauricio Suárez. On this view, a model does not need to resemble the world to be informative. It needs only to reliably license certain inferences — specific conclusions that a competent user can draw. The London Underground map bears almost no geometric resemblance to the actual tunnel network, yet it lets you plan a journey reliably. The map is wrong about shape but right about connection, and that is all it needs to be.
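A toy version of that map makes the point concrete. The sketch below (in Python, with invented station names rather than the real network) stores only which stations connect to which; every geometric fact has been thrown away, yet the structure still reliably licenses the one inference a traveler needs.

```python
from collections import deque

# A toy "tube map": pure connectivity, no geometry. Station names and
# links are invented for illustration, not the actual Underground.
TUBE = {
    "Alpha": ["Beta", "Gamma"],
    "Beta":  ["Alpha", "Delta"],
    "Gamma": ["Alpha", "Delta"],
    "Delta": ["Beta", "Gamma", "Echo"],
    "Echo":  ["Delta"],
}

def plan_journey(start: str, goal: str) -> list[str]:
    """Breadth-first search: returns a route with the fewest stops."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in TUBE[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return []  # no route exists

print(plan_journey("Alpha", "Echo"))  # ['Alpha', 'Beta', 'Delta', 'Echo']
```

Nothing in this structure knows where any station sits in space. The representation is wrong about everything except connection, and connection is all that journey planning requires.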
This shifts the epistemological question from "Is the model true?" to "What inferences does it support, and for whom?" The second question is inherently social. A climate model that reliably predicts global temperature trends may be useless for predicting rainfall in a specific valley. Whether that model counts as knowledge depends on which community is using it and what decisions they face. The model's epistemic value is not intrinsic — it is relational.
Bruno Latour and Steve Woolgar demonstrated in their laboratory ethnography Laboratory Life that scientific facts are constructed through chains of inscription — instruments produce traces, traces become data, data become figures, figures become claims. Models sit at the center of these chains, transforming raw observations into portable, combinable knowledge. Their falsity is not a bug; it is the mechanism by which the complexity of the world gets translated into something human communities can argue about, test, and refine.
Takeaway: A model does not need to be true to generate knowledge — it needs to reliably support the right inferences for the community using it.
Usefulness Over Truth
The statistician George Box is often quoted: "All models are wrong, but some are useful." What is less often appreciated is the radical epistemological claim buried in that sentence. It suggests we should evaluate scientific representations not by their correspondence to reality but by their pragmatic success — their ability to help specific communities accomplish specific goals.
This pragmatic view has deep roots in the philosophy of science. The physicist Pierre Duhem argued in 1906 that scientific theories should be understood as instruments for organizing experience, not as descriptions of an underlying reality. More recently, philosophers like Philip Kitcher and Helen Longino have extended this insight by showing how social values inevitably shape which purposes count as worth pursuing — and therefore which models count as useful.
Consider two models of forest ecosystems. One optimizes for predicting timber yield. Another optimizes for predicting biodiversity loss. Both simplify the same forest in radically different ways. Asking which is truer misses the point; they serve different social interests, embedded in different value systems. The pragmatic framework makes this visible. A correspondence framework — asking which model better mirrors reality — obscures it, because it pretends the choice of what to model accurately is value-free.
This does not collapse into relativism. Some models fail even on their own terms — they do not predict what they promise, or they break down under conditions they claim to cover. Pragmatic evaluation is rigorous, but its rigor is indexed to purpose. The recognition that usefulness is the proper measure of models does not weaken science. It clarifies what science has always been doing: building tools for navigating a world too complex to capture in any single representation.
Takeaway: Judging a model by its truth is like judging a hammer by its resemblance to a nail — the real question is whether it helps you build what you need.
The wrongness of scientific models is not a regrettable limitation we hope technology will someday fix. It is the source of their power. By selectively misrepresenting the world, models make the world thinkable, testable, and actionable.
Recognizing this transforms how we understand scientific authority. Science does not earn our trust by producing perfect mirrors of nature. It earns our trust through disciplined communities that openly negotiate which distortions serve which purposes — and hold each other accountable when a model stops being useful.
The next time someone tells you a model is wrong, the productive response is not alarm. It is curiosity: wrong in what way, and useful for whom?
