A 2023 study by Engelmann and colleagues in Cognition presented participants with structurally identical harms—one carried out by a single agent, the other distributed across a chain of five contributors. Despite equivalent outcomes, participants consistently assigned less blame to each individual in the chain. This finding won't surprise moral psychologists, but its implications cut deeper than the familiar bystander effect. It suggests that our cognitive architecture for moral evaluation systematically discounts responsibility as the number of contributing agents increases—even when each agent's contribution remains necessary for the harm to occur.
Complicity, as a philosophical concept, sits at a fault line between individual moral agency and collective action. Traditional analytic philosophy has treated it largely through the lens of causation and intention: you are complicit to the degree that you knowingly contribute to wrongdoing. But experimental work over the past decade has revealed that ordinary moral cognition operates on a far messier set of heuristics. Proximity, volition, causal structure, group membership, and even the sheer number of co-actors warp complicity judgments in ways that resist neat philosophical formalization.
This matters beyond the seminar room. Complicity judgments undergird legal doctrines of conspiracy and corporate liability, public attitudes toward institutional wrongdoing, and emerging frameworks for AI accountability. If our intuitive sense of shared blame follows predictable psychological patterns—patterns that sometimes diverge sharply from defensible moral principles—then understanding those patterns becomes essential for designing institutions, laws, and technologies that allocate responsibility more accurately.
Diluted Responsibility: The Arithmetic of Shared Blame
The diffusion of responsibility is one of the most robust findings in social psychology, dating back to Darley and Latané's work on bystander intervention in the late 1960s. But its application to moral judgment—as opposed to motivation to act—has only recently received rigorous experimental treatment. Gerstenberg, Lagnado, and colleagues have shown that when participants evaluate causal contributions to a harmful outcome, increasing the number of contributors reliably reduces the blame assigned to any individual, even when each contribution is independently necessary.
This effect follows what we might call a subadditive pattern. If one person causes a harm, they receive near-maximal blame. Add a second contributor, and each receives roughly 60–65% of the blame a solo agent would have received: more than an even 50% split, but well short of full blame. Add five contributors, and each receives around 30–40%. Yet each contribution remains individually necessary, which on many views is grounds for holding each agent fully responsible. The per-person shortfall means that collective wrongdoing generates a kind of moral remainder: blame that ends up attached to no one in particular.
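A minimal numerical sketch can make the dilution pattern concrete. The power-law discount and the exponent `alpha` below are illustrative assumptions chosen to land in the reported ranges, not parameters taken from the studies:

```python
# Toy sketch of subadditive blame dilution.
# The functional form (power law) and alpha are illustrative assumptions,
# not values reported by Gerstenberg, Lagnado, and colleagues.

def per_person_blame(n_contributors: int, alpha: float = 0.7) -> float:
    """Blame assigned to each of n individually necessary contributors,
    as a percentage of the blame a solo wrongdoer would receive."""
    return 100 * n_contributors ** -alpha

for n in (1, 2, 5):
    each = per_person_blame(n)
    shortfall = 100 - each  # the per-agent blame that goes unassigned
    print(f"n={n}: each receives {each:.0f}%, shortfall {shortfall:.0f}%")
```

With `alpha = 0.7`, a solo agent receives 100%, each of two contributors about 62%, and each of five about 32%, reproducing the qualitative pattern: shares shrink as the group grows, and a larger fraction of the blame attaches to no one.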
Joshua Greene's dual-process framework helps explain this. The automatic, affect-driven system of moral evaluation appears to anchor on individual agents. When confronted with multiple agents, the system struggles to distribute the emotional weight of condemnation proportionally. The result is a systematic under-punishment problem: the more distributed the wrongdoing, the less censure any individual participant attracts.
Crucially, this diffusion is not uniform across all dimensions of moral evaluation. Research by Cushman and colleagues suggests that judgments of wrongness—evaluations of the act itself—remain relatively stable regardless of how many people participate. It is specifically judgments of blame and punishment deservingness that dilute. This dissociation matters enormously: it means people can simultaneously recognize that a collective act is deeply wrong while failing to hold anyone adequately accountable for it.
The philosophical upshot is uncomfortable. Many real-world moral catastrophes—climate change, systemic discrimination, financial crises—are precisely the kinds of distributed harms where this cognitive bias operates most powerfully. Our evolved moral psychology may be least equipped to handle the forms of wrongdoing that matter most in complex, interconnected societies.
Takeaway: When blame is shared, it doesn't simply divide—it partially evaporates. Our moral cognition systematically under-accounts for collective harm, creating a psychological loophole that large-scale wrongdoing exploits by default.
Causal and Volitional Contributions: Not All Complicity Is Equal
Philosophical theories of complicity have long distinguished between different modes of contribution—aiding, abetting, facilitating, commanding, encouraging. Experimental moral psychology now provides evidence that ordinary moral cognition tracks these distinctions, but not always in the ways philosophers have assumed. A series of studies by Cushman, Knobe, and Sinnott-Armstrong demonstrates that two factors dominate complicity judgments: the causal directness of one's contribution and the volitional quality of one's involvement.
Causal directness operates as a gradient. Participants assign significantly more blame to someone who physically delivers a harmful substance than to someone who manufactures it, and more to the manufacturer than to someone who merely provides funding—even when all three contributions are equally necessary for the harm. This tracks with Cushman's broader finding that moral cognition privileges proximate causation over distal causation, likely because proximate causes are more perceptually salient and more reliably linked to harmful intent in ancestral environments.
The volitional dimension is equally powerful but operates somewhat independently. Agents who freely choose to participate in collective wrongdoing receive substantially more blame than those who are coerced, pressured, or merely negligent—even when the causal contribution is identical. Importantly, research by Young and Saxe using fMRI data shows that volitional assessments recruit the temporoparietal junction, a region strongly associated with mental state attribution. This suggests that judging complicity is not merely a causal reasoning task but a deeply mentalistic one: we evaluate not just what someone did, but how deliberately they entered into the collective action.
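The two-factor structure described above can be sketched as a toy model. Everything here is an assumption for illustration: the role labels, the numeric weights, and the multiplicative combination rule are invented, and only the orderings (more causally direct draws more blame, freely chosen draws more blame than coerced) reflect the reported findings:

```python
# Purely illustrative two-factor sketch of complicity judgments.
# Weights are invented; only the within-factor orderings
# (direct > distal, chosen > pressured > coerced) track the findings.

DIRECTNESS = {"delivers": 1.0, "manufactures": 0.6, "funds": 0.3}
VOLITION = {"freely chosen": 1.0, "pressured": 0.5, "coerced": 0.2}

def predicted_blame(role: str, volition: str, max_blame: float = 100.0) -> float:
    """Toy prediction: causal directness and volitional quality
    combine multiplicatively (an assumption, not an established law)."""
    return max_blame * DIRECTNESS[role] * VOLITION[volition]

# Holding volition fixed, blame falls with causal distance:
for role in ("delivers", "manufactures", "funds"):
    print(role, predicted_blame(role, "freely chosen"))

# Holding the causal role fixed, coercion slashes blame:
print(predicted_blame("delivers", "coerced"))
```

One consequence of the multiplicative assumption is that a willing financier and a coerced deliverer can end up with comparably low predicted blame, which mirrors the ambiguity the structural-complicity cases below exploit.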
Where this becomes philosophically fraught is in cases of structural complicity—participation in harmful systems through routine economic or institutional behavior. Buying products made with exploitative labor, paying taxes that fund unjust wars, investing in environmentally destructive industries. Here, the causal contribution is maximally distal and the volitional quality is ambiguous. Predictably, experimental work shows that participants assign minimal blame in such scenarios, even when they acknowledge the systemic harm.
This creates a paradox for moral theory. The forms of complicity that arguably generate the most aggregate harm—diffuse, structural, embedded in ordinary life—are precisely the forms our moral psychology is worst at detecting and condemning. Philosophers like Iris Marion Young have argued for a social connection model of responsibility that bypasses individual blame in favor of collective obligation. The empirical data suggest why such a model is psychologically difficult to internalize, even when it is philosophically compelling.
Takeaway: Our moral cognition evaluates complicity through two dominant lenses—how directly you caused the harm and how voluntarily you participated. This means the most pervasive forms of structural complicity fall into a psychological blind spot precisely because they are causally indirect and volitionally ambiguous.
Corporate Moral Responsibility: Can Collectives Be Guilty?
The question of whether collective entities—corporations, governments, institutions—can bear moral responsibility in their own right, rather than merely as aggregates of individual agents, has been debated since at least Peter French's influential 1979 paper on corporate moral agency. Experimental philosophy has recently brought empirical traction to this debate. Studies by Knobe and Prinz, and later by Sytsma and Machery, show that ordinary moral cognition does spontaneously attribute intentionality and blame to corporate entities, and does so in ways that cannot be reduced to judgments about individual members.
This is more than a folk psychological curiosity. When participants in Knobe's side-effect experiments evaluate corporate actions, they exhibit the same asymmetry found with individual agents: harmful side effects are judged as intentional far more often than beneficial ones. But the effect is often stronger for corporate actors, suggesting that people hold collective entities to a higher standard of foresight and care. The underlying psychological mechanism appears to involve what Waytz and Young have called mind perception of groups—a tendency to attribute a unified mental life to coordinated collectives, particularly when those collectives display consistent patterns of behavior.
The neuroscience of this attribution is revealing. Imaging studies show that evaluating corporate moral responsibility activates the medial prefrontal cortex and right temporoparietal junction—regions associated with mentalizing about individual agents. The brain, it seems, recruits the same theory-of-mind architecture for corporate blame as for individual blame, which may explain why corporate responsibility judgments feel psychologically real even to those who philosophically doubt that corporations have minds.
This has direct implications for machine ethics and AI governance. As artificial agents become increasingly autonomous—making decisions about lending, sentencing, medical triage—the question of collective versus individual responsibility becomes urgent. If an AI system trained on biased data produces discriminatory outcomes, who is complicit? The developers, the deploying organization, the data providers, the regulators who failed to intervene? Experimental evidence suggests that people will naturally gravitate toward blaming the most visible, unified entity—typically the corporation—while under-attributing responsibility to the diffuse network of contributors.
The philosophical challenge is to construct frameworks of collective responsibility that correct for these cognitive biases without abandoning the legitimate insight that collective entities can bear irreducible moral obligations. List and Pettit's account of group agency—grounded in the capacity for rational decision-making at the organizational level—offers one promising direction. The empirical psychology of complicity suggests that such frameworks will need to be institutionally enforced rather than left to spontaneous moral intuition, because our intuitions predictably misallocate blame in complex organizational contexts.
Takeaway: Our brains treat corporations as moral agents using the same cognitive machinery we use for individual people—which means corporate blame feels psychologically genuine but inherits all the biases and blind spots of person-directed moral judgment.
The experimental study of complicity reveals a consistent pattern: human moral cognition was forged for face-to-face interactions among small groups, and it struggles systematically when confronted with distributed agency, structural causation, and collective actors. Responsibility diffuses, causal distance breeds indifference, and corporations absorb blame that might be more accurately distributed—or more accurately multiplied.
These are not merely academic observations. They have immediate consequences for how we design legal systems, corporate governance structures, and AI accountability frameworks. Any institution that relies solely on intuitive moral judgment to allocate responsibility for collective wrongdoing will predictably produce under-attribution of blame and under-deterrence of harm.
The path forward requires what we might call moral engineering—the deliberate construction of institutional mechanisms that compensate for the known biases in complicity cognition. Understanding the psychology is the first step. Refusing to defer to it uncritically is the second.