We live in an age that reveres science—and rightly so. Scientific methods have revealed truths about the universe that our ancestors could scarcely imagine. So when moral disagreements arise, it's tempting to think that if we just gathered enough data, sequenced enough genomes, or scanned enough brains, we could finally settle questions about right and wrong.
But here lies a persistent confusion that philosophers have warned about for centuries. The naturalistic fallacy—the attempt to derive moral conclusions directly from factual premises—represents one of the most fundamental errors in ethical reasoning. Understanding why this matters isn't mere academic pedantry; it shapes how we think about everything from evolutionary psychology to medical ethics.
This doesn't mean science is irrelevant to morality. Far from it. But recognising the logical gap between what is and what ought to be is essential for clear thinking about ethics. Let's examine why this distinction matters and how we can navigate it without falling into common traps.
The Is-Ought Gap: Hume's Enduring Insight
In his Treatise of Human Nature (1739–40), David Hume made an observation that continues to shape moral philosophy. He noticed that writers often begin with factual statements about God, human nature, or society, and then suddenly shift to claims about what we ought to do. This transition, Hume argued, requires justification that's rarely provided.
The logical point is straightforward. No collection of purely descriptive premises can, by itself, entail a normative conclusion. 'Humans naturally favour their kin' doesn't logically yield 'We should favour our kin.' 'Certain behaviours increase reproductive fitness' doesn't entail 'We should pursue reproductive fitness.' The gap between description and prescription requires an additional normative premise to bridge it.
G.E. Moore later refined this insight into what he called the 'naturalistic fallacy': the mistake of identifying moral properties like goodness with natural properties like pleasure or evolutionary fitness. If 'good' simply meant 'pleasurable,' then asking 'Is pleasure good?' would amount to asking 'Is pleasure pleasurable?', a trivial question with an obvious answer. But the question remains genuinely open (this is Moore's 'open question argument'), suggesting goodness isn't reducible to any natural property.
This doesn't mean facts are irrelevant to ethics. Rather, it means facts alone are insufficient. Every moral argument smuggles in at least one normative premise, whether stated or assumed. Recognising this forces us to identify and examine those hidden assumptions rather than pretending our ethics flow directly from science.
Takeaway: Factual premises can inform moral conclusions but never entail them on their own. Every ethical argument contains at least one normative premise; the real work lies in identifying and defending it.
Science as Informant, Not Arbiter
If science can't settle ethics, what role does it play? A crucial but circumscribed one. Empirical findings inform our moral reasoning without determining its conclusions. They tell us what's possible, what's probable, and what the likely consequences of different choices might be.
Consider a practical example. Neuroscience might reveal that human beings have limited capacities for impartial concern—we're wired to care more about those close to us. This finding is morally relevant. It might suggest that moral systems demanding perfect impartiality are unrealistic, or that we need institutions to correct for our biases, or that we should structure society to account for these limitations. But none of these conclusions follows automatically from the neuroscience.
Similarly, evolutionary psychology might explain why we have certain moral intuitions—our disgust responses, our sense of fairness, our tribal tendencies. These explanations can be illuminating. They might make us suspicious of intuitions that served our ancestors well but seem poorly suited to modern contexts. Yet understanding the origin of an intuition tells us nothing about whether that intuition is correct.
The proper relationship is collaborative. Science reveals the landscape of possibility within which moral reasoning operates. It constrains which ethical theories are feasible given human nature. It predicts consequences of different policies. But deciding which possibilities to pursue, which constraints to accept, and which consequences matter—these remain normative questions requiring normative answers.
Takeaway: Science illuminates the terrain of moral choice, revealing what's possible, probable, and consequential, but choosing our path through that terrain requires normative commitments science cannot provide.
Spotting the Fallacy in the Wild
The naturalistic fallacy appears constantly in public discourse, often disguised as scientific authority. Learning to recognise its forms helps inoculate against sloppy moral reasoning—including our own.
One common version appeals to nature directly: 'X is natural, therefore X is good' (or its inverse: 'X is unnatural, therefore X is wrong'). But diseases are natural. Medicine is unnatural. The natural/unnatural distinction carries no inherent moral weight. Arguments that invoke it require additional premises explaining why naturalness should matter morally—premises that are themselves contestable.
Another form involves evolutionary debunking. 'Our moral intuitions evolved for reproductive success, not truth-tracking, therefore we shouldn't trust them.' But this argument proves too much. Our rational faculties also evolved for reproductive success. If evolutionary origins undermine reliability, they undermine our ability to make any argument, including this one. The proper conclusion isn't wholesale scepticism but careful evaluation of which intuitions might be distorted by their evolutionary history.
A subtler version confuses moral progress with scientific progress. We might discover that certain social arrangements maximise well-being. But 'We should maximise well-being' isn't a scientific finding—it's a normative commitment that must be argued for on its own terms. Different ethical frameworks prioritise different values: rights, virtues, fairness, care. Science cannot adjudicate between them because the dispute isn't about facts but about what matters.
Takeaway: When someone claims science has resolved a moral question, ask: what normative premise connects their facts to their conclusion? That premise is where the real moral argument lives, and where scrutiny should focus.
The naturalistic fallacy isn't a dismissal of science but a clarification of its role. Science gives us extraordinary power to understand the world as it is. Ethics grapples with how it should be. These are different enterprises requiring different methods, even as they inform each other.
Recognising this distinction should breed intellectual humility. Our moral convictions rest on normative foundations that cannot be proven the way theorems are proven or hypotheses tested. This doesn't make ethics arbitrary—we can still give reasons, examine consequences, and test for consistency. But it does mean moral disagreements are often deeper than factual disputes.
Perhaps this is precisely why ethics matters so much. If science could settle moral questions, ethics would be merely technical. Instead, it remains irreducibly human—a domain where we must take responsibility for our values, not just discover them.