When you feel a flash of outrage at an injustice, or an instinctive revulsion at a proposed policy, something deep within you is issuing a moral verdict. But what exactly is speaking? Is it a reliable compass pointing toward genuine ethical truth, or merely the echo of selection pressures that shaped your ancestors' brains on the African savanna?
This question sits at the heart of contemporary meta-ethics. Evolutionary debunking arguments suggest that if natural selection can fully explain why we have certain moral intuitions, then those intuitions lose their claim to track moral reality. The implications are profound: if our deepest moral convictions are adaptive accidents, the entire edifice of intuition-based ethics may rest on sand.
Yet dismissing intuitions entirely creates its own crisis. Moral reasoning has to start somewhere, and every philosophical argument eventually bottoms out in judgments that simply seem right. The challenge, then, is not whether to use intuitions but how to use them wisely—developing criteria that distinguish the trustworthy from the suspect.
Debunking Arguments: When Explanation Becomes Undermining
The core logic of an evolutionary debunking argument is deceptively simple. If we can explain why humans hold a moral belief entirely by reference to its survival value—without any appeal to its truth—then we have reason to doubt that belief tracks moral reality. Sharon Street's influential formulation puts the dilemma starkly: either moral truths influenced our evolution (which seems scientifically implausible) or the alignment between our intuitions and moral truth is a massive coincidence.
Consider our strong intuitions about kin favoritism—the conviction that we owe more to our children than to strangers. Evolutionary biology offers a clean explanation: organisms that preferentially aided genetic relatives propagated more successfully. This explanation is complete on its own terms. It doesn't need a moral fact about the special status of kin to work. So should we conclude the intuition is unreliable?
Not so fast. Critics like David Enoch argue that debunking arguments prove too much. If evolutionary origins undermine moral beliefs, the same logic threatens our confidence in basic logical and mathematical intuitions, which are also products of natural selection. This is the "companions in guilt" response: you cannot selectively debunk moral intuitions without risking epistemic collapse across all domains of reasoning.
The debate reveals a crucial distinction between explaining a belief and explaining it away. That an intuition has evolutionary origins is a fact about its causal history. Whether that history undermines its epistemic status depends on additional philosophical commitments about what moral truth is and how we access it. Naturalists and moral realists will reach very different conclusions from the same evolutionary data.
Takeaway: An explanation of why you hold a belief is not automatically a refutation of that belief. The move from causal origin to epistemic defeat requires additional argument—and that argument is more contested than it first appears.
The Epistemic Role of Intuitions: Starting Points or Evidence?
Philosophers disagree fundamentally about the epistemic weight intuitions deserve. At one end, strong intuitionism—associated with figures like G.E. Moore and W.D. Ross—holds that certain moral intuitions are self-evident: they carry justificatory force simply by being clearly apprehended. On this view, your conviction that torturing innocents for amusement is wrong needs no further argument. It is the evidence.
At the other end, eliminativists about moral intuition argue that gut feelings should play no evidential role at all. Peter Singer has pressed this line, suggesting that we should trust careful reasoning over intuitive reactions, especially when those reactions can be traced to morally irrelevant factors like proximity, vividness, or in-group bias. His discussions of trolley problems illustrate how intuitions fragment under pressure, yielding inconsistent verdicts across structurally similar cases.
A sophisticated middle path is reflective equilibrium, developed by John Rawls. On this model, intuitions serve as provisional data points. We begin with considered judgments—intuitions held under conditions of calm reflection—and then adjust them against general principles, working back and forth until beliefs and principles cohere. No single intuition is sacrosanct, but neither is any theoretical principle immune from revision in the face of strong intuitive resistance.
What reflective equilibrium acknowledges is that moral reasoning is inherently dialectical. We cannot step outside our intuitions to evaluate them from nowhere. But we can subject them to internal critique, testing them for consistency, sensitivity to irrelevant factors, and coherence with our broader moral commitments. The question is not whether intuitions count, but how much and under what conditions.
Takeaway: Treating intuitions as infallible oracles and dismissing them as worthless noise are both errors. The more productive stance is to treat them as defeasible evidence—starting points that earn or lose credibility through a process of critical reflection and mutual adjustment with principles.
Calibrating Intuitions: A Practical Framework for Trust
If intuitions vary in reliability, we need practical criteria for sorting the trustworthy from the suspect. Several markers consistently distinguish robust intuitions from fragile ones. First, consider cross-cultural convergence. Intuitions shared across vastly different societies—prohibitions on unprovoked harm, norms of reciprocity, care for the vulnerable—are harder to debunk than culturally parochial ones. Convergence doesn't guarantee truth, but it raises the evidential bar for dismissal.
Second, check for sensitivity to morally irrelevant factors. Research in moral psychology has shown that intuitions shift based on framing effects, emotional vividness, and even whether someone recently washed their hands. An intuition that reverses when the scenario is described differently—but the morally relevant features remain identical—is one you should hold at arm's length. The trolley problem variants are a textbook illustration: the switch case and the footbridge case are structurally analogous, yet they provoke dramatically different responses.
Third, apply what we might call the informed reflection test. Does the intuition survive exposure to relevant empirical facts and sustained philosophical scrutiny? Many intuitions about bioethical issues—genetic enhancement, end-of-life care, animal welfare—shift substantially when people learn more about the science involved. An intuition that dissolves upon understanding the facts was likely tracking ignorance, not moral truth.
Finally, consider emotional provenance. Disgust-based moral intuitions are particularly suspect because disgust evolved primarily as a pathogen-avoidance mechanism and has been co-opted to rationalize prejudice throughout history. This does not mean every disgust-linked intuition is wrong, but it does mean the burden of justification is higher. A calibrated moral reasoner treats intuitions not as verdicts but as hypotheses—worthy of attention, demanding of scrutiny.
Takeaway: Before trusting a moral intuition, ask four questions. Is it widely shared across cultures? Does it hold steady under different framings? Does it survive informed reflection? And is it driven by emotions—like disgust—that have a track record of leading us astray?
Moral intuitions are neither oracles nor illusions. They are cognitive tools shaped by evolution, culture, and individual experience—tools that can illuminate ethical truth but can also mislead profoundly. The mature response is neither uncritical trust nor wholesale rejection.
What emerges from this analysis is a disposition of calibrated humility. We take our intuitions seriously as starting points, but we subject them to rigorous cross-examination—checking for bias, consistency, cultural parochialism, and sensitivity to irrelevant factors. The intuitions that survive this process earn a provisional seat at the table of moral reasoning.
The deepest lesson may be this: moral thinking is not about finding a foundation that never shifts. It is an ongoing practice of reflection, adjustment, and intellectual honesty about the limits of our own moral perception.