A growing body of experimental philosophy research suggests that ordinary people are not the naive consequentialists that utilitarian theorists once assumed. When presented with trolley-style dilemmas, sacrificial scenarios, and rights-violation cases, participants across cultures reliably exhibit patterns of moral judgment that track deontological constraints—prohibitions against using people as mere means, distinctions between doing and allowing harm, and sensitivity to intention that no purely outcome-based framework can explain. These findings raise a provocative question: does common sense morality contain an implicit philosophical theory worth taking seriously?

The stakes of this question extend well beyond academic curiosity. Machine ethics systems, legal reasoning frameworks, and AI alignment strategies increasingly draw on folk moral intuitions as training data or normative benchmarks. If ordinary moral cognition embeds a coherent—if implicit—moral philosophy, then these intuitions deserve more than dismissal as evolutionary noise or cognitive bias. They may constitute evidence about moral reality, or at minimum, a sophisticated heuristic architecture refined over millennia of social coordination.

But the picture is not entirely flattering. Alongside the surprising coherence of folk moral judgment lies a tangle of inconsistencies, framing effects, and systematic biases that complicate any straightforward vindication of common sense morality. This article examines three dimensions of the problem: the experimental evidence for deontological structure in folk morality, the implicit principles that best systematize ordinary judgment, and the fractures where common sense morality breaks down under its own internal tensions.

Folk Deontology Evidence: The Constraint Structure of Ordinary Moral Judgment

The experimental record on folk moral intuitions has matured considerably since Joshua Greene's pioneering fMRI studies in the early 2000s. Greene's dual-process model originally framed deontological responses as emotional reactions and consequentialist reasoning as the product of deliberative cognition. But subsequent research has substantially complicated this picture. Studies by Mikhail, Cushman, and others demonstrate that folk moral judgments track structural features of causal and intentional relations—not merely emotional intensity—in ways that align with sophisticated deontological principles.

Consider the robust finding that people distinguish between harm caused as a means and harm caused as a side effect. Both this means-side-effect distinction and the related doing-allowing distinction appear across age groups, cultures, and even in populations with limited formal education. Cushman's (2008) work showed that participants are sensitive to the difference between intended and foreseen consequences even when outcome severity is held constant. This is not the pattern a population of implicit consequentialists would produce.

The doctrine of double effect—the principle that it can be permissible to cause harm as a foreseen side effect of a good action but impermissible to cause identical harm as a means—finds striking empirical support in folk judgment patterns. John Mikhail's computational analysis of moral cognition argues that something functionally equivalent to this doctrine operates as a universal moral grammar, analogous to Chomsky's linguistic competence. Participants apply it without being able to articulate it, much as native speakers apply grammatical rules they cannot state.

More recent work using process dissociation procedures (Conway & Gawronski, 2013) has shown that deontological and consequentialist inclinations operate as independent processes rather than poles of a single continuum. People can score high on both, or low on both, which undermines the simple narrative that deontological judgment is merely the absence of consequentialist reasoning. Folk morality appears to have genuine constraint-based architecture—rules that prohibit certain actions regardless of aggregate welfare gains.
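The process-dissociation logic can be made concrete with a short sketch. In Conway and Gawronski's design, "congruent" dilemmas are ones where harming fails to maximize welfare, so deontological and utilitarian considerations agree in rejecting it, while "incongruent" dilemmas pit the two against each other. Their model equations then recover two independent parameters from rejection rates. The function name and example numbers below are illustrative, not taken from their data:

```python
def pd_estimates(p_reject_congruent: float, p_reject_incongruent: float):
    """Estimate deontological (D) and utilitarian (U) parameters.

    Congruent dilemmas: the harm does NOT maximize welfare, so both
    deontology and utilitarianism reject it:
        p(reject | congruent)   = D + (1 - D) * U
    Incongruent dilemmas: the harm DOES maximize welfare, so only
    deontology rejects it:
        p(reject | incongruent) = D
    """
    D = p_reject_incongruent
    if D >= 1.0:
        raise ValueError("U is undefined when D = 1")
    U = (p_reject_congruent - D) / (1 - D)
    return D, U

# A participant who rejects harm in 90% of congruent dilemmas but
# only 40% of incongruent ones:
D, U = pd_estimates(0.90, 0.40)
# D = 0.40, U ~ 0.83: moderately deontological AND strongly utilitarian
```

Because D and U are estimated separately, nothing forces them to trade off: a participant can come out high on both or low on both, which is exactly the independence finding described above.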

Critically, this deontological structure is not merely a Western artifact. Cross-cultural studies by Abarbanell and Hauser (2010) with rural Mayan populations, and Ahlenius and Tännsjö's (2012) comparative work, reveal that core structural distinctions—personal versus impersonal harm and means versus side effect—emerge in populations with radically different moral traditions, though not uniformly: Abarbanell and Hauser found the action-omission distinction attenuated among their rural participants. Whatever drives the shared patterns, it runs deeper than cultural instruction.

Takeaway

Ordinary moral cognition is not a crude consequentialist calculator overlaid with emotional noise—it embeds constraint-based structural principles that track deontological distinctions with surprising precision and cross-cultural stability.

Implicit Moral Principles: Reverse-Engineering the Folk Moral Framework

If folk morality reliably tracks deontological constraints, the next question is which principles best systematize the pattern. This is the project of descriptive moral theory—not asking what people should believe, but modeling the implicit rules that generate their actual judgments. The results suggest a framework more nuanced than any single canonical ethical theory, incorporating elements of rights-based constraints, threshold deontology, and agent-relative prerogatives.

The most robustly supported principle is a constraint against instrumental harm: using a person's body or welfare as a causal lever to produce benefits for others triggers strong moral prohibition. This principle explains why pushing someone off a bridge to stop a trolley is judged far worse than diverting the trolley via a switch, even when casualties are identical. Mikhail's formal analysis models this as a prohibition on battery—purposeful, unconsented bodily contact used as a means—embedded at a deep level of moral cognition.
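To see what a constraint of this kind does computationally, consider a deliberately crude toy model: a rule that screens for instrumental battery before any welfare arithmetic is consulted. The scenario encoding and verdict function here are illustrative inventions for exposition, not Mikhail's actual formalism:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    deaths_caused: int
    deaths_prevented: int
    harm_is_means: bool     # is the victim's harm the causal lever?
    involves_battery: bool  # purposeful, unconsented bodily contact?

def folk_verdict(a: Action) -> str:
    # The constraint applies first: instrumental battery is prohibited
    # outright, before the welfare arithmetic is ever consulted.
    if a.harm_is_means and a.involves_battery:
        return "impermissible"
    # Only then are outcomes allowed to decide.
    if a.deaths_prevented > a.deaths_caused:
        return "permissible"
    return "impermissible"

switch = Action("divert trolley via switch", 1, 5,
                harm_is_means=False, involves_battery=False)
footbridge = Action("push man off footbridge", 1, 5,
                    harm_is_means=True, involves_battery=True)

print(folk_verdict(switch))      # permissible
print(folk_verdict(footbridge))  # impermissible
```

The point of the sketch is the lexical ordering: the casualty counts are identical in both cases, yet the verdicts diverge because the constraint is checked before the consequences are weighed.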

A second systematic principle involves agent-relative prerogatives—the folk intuition that individuals are not morally required to maximize aggregate welfare at arbitrary personal cost. Experimental work by Sachdeva, Iliev, and Medin (2009) and others confirms that people reliably judge that moral agents have special permissions regarding their own projects, relationships, and bodily integrity. This maps onto the philosophical concept of agent-centered options: you may decline to sacrifice your kidney even when doing so would save five lives, and ordinary moral cognition treats this as permissible rather than selfish.

A third structural feature is moral luck sensitivity modulated by intent. Folk moral judgments show a complex interaction between outcome severity and mental state. Cushman's (2008) experiments demonstrate that people punish attempted harms (bad intent, no bad outcome) and penalize negligent harms (no bad intent, bad outcome) using partially independent mechanisms. The resulting system is neither purely intent-based nor purely outcome-based—it operates as a multi-factor framework where both variables contribute but intent typically dominates in cases of direct action.
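The two-channel pattern can be caricatured as a weighted additive model. The weights below are illustrative placeholders chosen so that intent dominates, not values fitted to Cushman's data:

```python
def blame(intended_harm: bool, harm_occurred: bool,
          w_intent: float = 0.75, w_outcome: float = 0.25) -> float:
    """Toy blame score combining mental state and outcome.

    Intent and outcome contribute through partially independent
    channels: bad intent alone still attracts blame (attempts),
    and a bad outcome alone still attracts some blame (negligence).
    """
    return w_intent * intended_harm + w_outcome * harm_occurred

blame(True, True)    # completed harm:  1.0
blame(True, False)   # attempted harm:  0.75 (punished despite no outcome)
blame(False, True)   # negligent harm:  0.25 (penalized despite no intent)
blame(False, False)  # no harm, no intent: 0.0
```

Even this caricature reproduces the qualitative signature in the text: neither factor alone determines the verdict, both contribute, and intent carries the larger weight.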

What emerges from this reverse-engineering is not a tidy philosophical theory but something more like a pluralistic moral operating system. It incorporates deontological constraints as near-absolute prohibitions, consequentialist considerations that modulate judgments at the margins, and virtue-like assessments of character and intent. This pluralism may itself be a feature rather than a bug—a moral architecture optimized for the diverse coordination problems of human social life rather than for theoretical elegance.

Takeaway

The implicit moral framework embedded in folk judgment is not utilitarian, not purely Kantian, but a pluralistic operating system that integrates constraints against instrumental harm, agent-relative prerogatives, and dual sensitivity to intent and outcomes.

Gaps and Inconsistencies: Where Common Sense Morality Fractures

For all its structural sophistication, folk morality is not internally consistent. Experimental philosophy has identified systematic fractures—places where ordinary moral judgment contradicts itself or collapses under reflective scrutiny. These inconsistencies are not mere noise; they reveal the seams between cognitive subsystems and the limits of heuristic moral architecture.

The most well-documented inconsistency involves framing effects on moral judgment. Petrinovich and O'Neill (1996) and subsequent studies show that logically equivalent moral scenarios produce different judgments depending on whether outcomes are described in terms of lives saved versus lives lost. This violates any coherent moral principle—deontological or consequentialist—because the underlying moral facts are identical. The framing effect reveals that folk moral cognition is partly driven by the presentation of information rather than its content, which is difficult to reconcile with the idea that common sense tracks genuine moral principles.

A second fracture appears in the scope insensitivity of folk moral concern. People's moral outrage and willingness to act scale poorly with the number of individuals affected—a phenomenon Slovic (2007) has called "psychic numbing." Folk morality assigns enormous weight to identifiable individuals and proximate harms while drastically underweighting statistical lives and distant suffering. This pattern is coherent with neither consequentialism (which demands proportional response to magnitude) nor standard deontology (which should not vary with emotional salience in this way).

A third class of inconsistency involves status quo bias and omission bias. People judge harmful actions as substantially worse than equally harmful omissions, and they treat existing states of affairs as morally privileged over potential alternatives. While the action-omission distinction has philosophical defenders, experimental evidence suggests that folk application of the distinction is far more extreme than any principled version can justify. Spranca, Minsk, and Baron (1991) demonstrated that people condemn an agent who poisons a colleague far more harshly than one who deliberately fails to warn about identical poisoning—judging the omission markedly less immoral even though the intent and the outcome are the same.

These inconsistencies do not necessarily invalidate the deontological structure documented earlier. But they impose a crucial constraint on how seriously we can take folk morality as evidence for moral truth. The implicit moral philosophy of common sense is partially coherent—impressively structured in some domains, systematically distorted in others. Any attempt to use folk intuitions as data for ethical theory must grapple with this mixed verdict: common sense morality contains genuine moral insight embedded in a matrix of cognitive limitation.

Takeaway

Common sense morality is neither moral bedrock nor mere bias—it is a partially coherent system whose genuine structural insights must be carefully disentangled from the framing effects, scope insensitivity, and omission biases that distort it.

The experimental philosophy of folk morality reveals a cognitive landscape more structured than skeptics assumed and more flawed than defenders hoped. Ordinary moral judgment embeds real deontological architecture—constraints against instrumental harm, sensitivity to intent, agent-relative permissions—that no consequentialist reduction can capture. These are not philosophical illusions produced by emotional interference; they are systematic features of moral cognition with cross-cultural stability.

Yet this architecture operates alongside genuine distortions: framing effects that shift verdicts without shifting facts, scope insensitivity that renders mass suffering psychologically invisible, and omission biases that let agents escape moral scrutiny through strategic inaction. The implicit moral philosophy of common sense is partially reliable—a map that captures important terrain while systematically misrepresenting other regions.

For machine ethics, AI alignment, and normative philosophy alike, the practical implication is clear: folk moral intuitions are indispensable evidence but unreliable oracles. The task is not to defer to common sense or dismiss it, but to develop principled methods for distinguishing its insights from its artifacts—reverse-engineering the moral operating system well enough to know which outputs to trust.