Classical decision theory rests on a principle so fundamental it barely warrants stating: your preference between two options should not change because a third, inferior option enters the set. This is the independence of irrelevant alternatives—a cornerstone of rational choice since its formalization by Arrow, echoed in the regularity and independence assumptions built into standard utility-based models of choice. And yet it is routinely, systematically violated by actual human choosers.

The decoy effect, the compromise effect, the similarity effect—these context-dependent preference reversals are not laboratory curiosities confined to undergraduate subject pools. They appear across consumer choice, clinical judgment, and policy evaluation. They are robust, replicable, and deeply inconvenient for any framework that treats preference as a stable readout of fixed underlying utilities. The question confronting formal decision theory is not whether context shapes preference—the empirical record settled that decades ago—but what computational architecture makes such context sensitivity inevitable.

Three theoretical frameworks have converged on this problem, each grounded in distinct assumptions about how the mind represents and evaluates alternatives. Multialternative Decision Field Theory models choice as dynamic competitive accumulation shaped by lateral inhibition. Divisive normalization imports a canonical neural computation to explain context-dependent value rescaling. And the distinction between local and global comparison architectures reveals how the very scope of evaluation generates qualitatively different violation patterns. Together these models point toward a striking conclusion: context dependence is not a defect in the decision machinery—it is an unavoidable signature of how resource-constrained neural systems encode value.

Multialternative Decision Field Theory

Decision Field Theory, originally formulated by Busemeyer and Townsend, models preferential choice as a sequential sampling process. Preference strength for each option accumulates stochastically over deliberation time, driven by momentary comparisons across attribute dimensions. When one option's accumulated preference crosses a threshold, deliberation terminates. The multialternative extension—MDFT—generalizes this to choice sets of any size by introducing a feedback matrix that fundamentally transforms the accumulation dynamics.

The feedback matrix is MDFT's theoretical core. It specifies how each option's accumulated preference at one moment influences every other option's preference at the next. Crucially, this influence takes the form of distance-dependent lateral inhibition: options occupying nearby positions in multidimensional attribute space suppress each other more strongly than distant options. The matrix transforms what would otherwise be independent accumulation into a coupled dynamical system—the trajectory of every option depends on the current state of every other.

At each moment during deliberation, attention stochastically samples one attribute dimension, and the system computes momentary valences reflecting how each alternative compares to the field on that dimension. These valences feed into the coupled accumulation process. The stochastic attention shifting introduces variability into preference trajectories, but it is the deterministic inhibition structure—not the noise—that carries the primary explanatory burden for context effects.

Consider the asymmetric dominance effect. A decoy D, dominated by target A but not by competitor B, occupies a position near A in attribute space. Proximity-driven inhibition means D and A suppress each other strongly. But A consistently earns positive valence over D—dominance guarantees this across all attended dimensions. So A's preference accumulates despite the mutual suppression, while D's activation collapses. As D declines, its inhibitory pressure on A dissipates, effectively releasing A from a suppression that B never experienced. This asymmetry in inhibitory relief drives A's choice probability above B's—the classic decoy effect, produced without any dedicated dominance-detection mechanism.
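The dynamics just described can be sketched in a small simulation. Everything numeric here is an illustrative assumption rather than a fitted model: the attribute matrix, the inhibition parameters, the noise level, and the deliberation length are all invented for demonstration. The sketch is meant only to exhibit the qualitative pattern: the dominated decoy's preference collapses, and the nearby target ends up ahead of the distant competitor.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative attribute matrix (rows: target A, competitor B, decoy D;
# columns: two attribute dimensions). D sits near A and is dominated by it.
M = np.array([[3.0, 1.0],   # A
              [1.0, 3.0],   # B
              [2.7, 0.8]])  # D: close to A, slightly worse on both dimensions

def feedback_matrix(M, phi1=0.03, phi2=0.02):
    """Distance-dependent lateral inhibition: off-diagonal entries are
    negative and decay with squared distance in attribute space, so only
    near neighbors suppress each other appreciably."""
    n = len(M)
    S = (1.0 - phi2) * np.eye(n)                 # self-feedback (mild decay)
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = np.sum((M[i] - M[j]) ** 2)
                S[i, j] = -phi1 * np.exp(-d2)    # strong only for neighbors
    return S

def deliberate(M, S, steps=100, noise=1.0):
    """One deliberation: attention stochastically samples a dimension,
    valences are contrasts against the mean of the other options, and
    preferences couple through the feedback matrix S."""
    n, k = M.shape
    C = np.eye(n) - (np.ones((n, n)) - np.eye(n)) / (n - 1)  # contrast matrix
    P = np.zeros(n)
    for _ in range(steps):
        dim = rng.integers(k)                    # stochastic attention shift
        V = C @ M[:, dim] + rng.normal(0.0, noise, n)
        P = S @ P + V
    return int(np.argmax(P))

S = feedback_matrix(M)
wins = np.zeros(3)
for _ in range(500):
    wins[deliberate(M, S)] += 1
shares = wins / wins.sum()
print(np.round(shares, 3))  # choice shares for A, B, D
```

Because D's preference goes negative, the A-D inhibition term feeds positive input back into A, which is the release-from-suppression asymmetry described above. Moving the hypothetical decoy row near B instead would redirect the same dynamics against A, which is how one architecture covers the different geometric placements.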

MDFT's theoretical elegance lies in generating all three classical context effects from a single architecture. The similarity effect emerges when a newcomer close to B splits inhibition, disadvantaging B relative to distant A. The compromise effect arises when a middle option receives moderate inhibition from both extremes rather than concentrated suppression from one. No separate modules, no effect-specific parameters—just different geometric placements of the added alternative activating different patterns in the same inhibitory dynamics. Context dependence emerges as the inevitable product of competitive evaluation unfolding over time.

Takeaway

Context effects are not three separate puzzles requiring three separate explanations. They are different geometric signatures of a single competitive inhibition process—where adding an option restructures the entire landscape of mutual suppression among all alternatives.

Divisive Normalization

Divisive normalization is perhaps the strongest candidate for a canonical computation in neuroscience. First characterized by Heeger in primary visual cortex, the operation is deceptively simple: a neuron's response to its preferred stimulus is divided by the pooled activity of a surrounding neural population, plus a baseline constant. This computation has since been documented across sensory modalities, cortical regions, and species—suggesting it reflects a fundamental principle of how neural systems represent information under metabolic and bandwidth constraints.

Louie and Glimcher's critical insight was extending this principle from sensory coding to value representation. Under their framework, the neural signal encoding the subjective value of any option is not computed in isolation. Each option's value is divided by the aggregate value of all options currently available, plus a semi-saturation constant. The result is a normalized value signal representing not absolute worth but relative worth given what else is on offer. Value becomes inherently contextual at the level of neural coding itself.

The implications for context effects follow directly from the mathematics of division. Adding a new option to the choice set increases the denominator, compressing all normalized value signals. The compression is not uniform in absolute terms: how far each option's signal falls depends on its raw value relative to the newcomer's and to the semi-saturation parameter. Crucially, compression shrinks the gap between existing options' value signals, and because downstream choice operates on noisy versions of those signals, a smaller gap means less reliable discrimination. For some configurations the resulting shift in choice probabilities is large enough to reverse preference—without any change in the options' objective attributes.
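The arithmetic is easy to see in a toy example. The raw values and the semi-saturation constant below are arbitrary assumptions, and the logistic choice rule is an illustrative stand-in for decision noise rather than a commitment of the normalization model itself. The point is only that the gap between two fixed options shrinks when a third enters the pool, and that fixed noise turns the shrunken gap into a less decisive choice.

```python
import math

def normalize(values, sigma=1.0):
    """Divisive normalization: each raw value divided by the pooled value
    of all currently available options plus a semi-saturation constant."""
    pool = sigma + sum(values)
    return [v / pool for v in values]

def p_choose_first(gap, temperature=0.05):
    """Logistic choice on the normalized-value gap: with fixed decision
    noise, a smaller gap yields a less reliable preference."""
    return 1.0 / (1.0 + math.exp(-gap / temperature))

# Illustrative raw values: A = 10, B = 8; then a third option worth 6 enters.
two = normalize([10.0, 8.0])
three = normalize([10.0, 8.0, 6.0])

gap_two = two[0] - two[1]        # A-vs-B signal gap with two options
gap_three = three[0] - three[1]  # same pair after the newcomer joins the pool

print(gap_two, gap_three)                              # the gap compresses
print(p_choose_first(gap_two), p_choose_first(gap_three))
```

Note that nothing about A or B changed; only the denominator did. That is the sense in which value is contextual at the level of the code itself.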

The decoy effect under this account works through differential compression. Introducing a decoy positioned near option A in attribute space increases the normalization pool in a way that asymmetrically affects the relative value signals for A and B. The specific pattern of compression depends on the model's parameterization, but for a range of empirically realistic values the normalization generates the observed boost to A's choice share. The mechanism requires no strategic reasoning, no dominance detection, no explicit comparison of comparisons. It falls directly out of how neurons encode magnitude.

What distinguishes divisive normalization from purely cognitive models is its anchorage in identified neural substrates. The framework predicts that value-coding neurons in ventromedial prefrontal cortex and lateral intraparietal area should exhibit suppressed firing rates as the choice set expands, with suppression proportional to total value present. Neurophysiological data from primate electrophysiology and human neuroimaging have confirmed these predictions. This empirical grounding gives the model a form of construct validity that behavioral-level frameworks cannot claim. Context dependence, under this account, is the expected signature of an efficient coding scheme evolved to represent relative rather than absolute value.

Takeaway

The brain does not assign value on a fixed internal scale. It recalibrates relative to everything currently on offer—meaning every option you perceive silently reshapes the neural representation of every other option in the set.

Local vs. Global Comparison Architectures

Both MDFT and divisive normalization share a critical architectural assumption: all options in the choice set simultaneously influence the evaluation of each alternative. Value is computed with respect to the entire set at once. But a substantial body of theoretical and empirical work suggests an alternative: that decision-makers sometimes decompose choice sets into pairwise evaluations, comparing options two at a time and aggregating these local assessments into a final preference ordering.

This distinction between local and global comparison is not a mere modeling convenience—it generates qualitatively different predictions about which context effects emerge, their relative magnitudes, and the conditions under which they appear or vanish. The architecture of comparison, not just the content being compared, becomes a determinant of choice output.

Under global comparison, all options serve simultaneously as the reference class. This architecture naturally produces the compromise effect: an option with moderate attribute values benefits from being non-extreme when the entire set defines the evaluative context. It also supports the asymmetric dominance effect through the mechanisms already described—the decoy's presence restructures the global evaluation landscape, whether through inhibition dynamics in MDFT or value compression in normalization models.

Local comparison operates on fundamentally different principles. In tournament-style frameworks and reason-based choice architectures, the decision-maker evaluates pairs of options and tallies dimensional advantages. Here the similarity effect emerges with particular naturalness: two options close in attribute space split pairwise contests approximately evenly, each winning about half the time depending on which dimension receives attention. A dissimilar third option wins its pairwise comparisons more decisively and accumulates more total victories. But the asymmetric dominance effect proves far more difficult to produce under pure local comparison—a dominated option loses every pairwise contest and has no mechanism for channeling preference toward its dominator.
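A minimal tournament sketch makes both claims concrete. The attribute values are invented for illustration, and one simplifying assumption does the work: within a single deliberation, attention fixes one dimension that is shared across all pairwise contests. That shared attention is what lets two similar options split wins between themselves while a dissimilar option's share is untouched.

```python
import random

random.seed(3)

# Illustrative options (two attributes each). S is a near-clone of B.
A = (3.0, 1.0)
B = (1.0, 3.0)
S = (0.9, 3.1)

def tournament_winner(options):
    """One deliberation under local comparison: attention fixes a single
    dimension, every pair is contested on it, and the option with the
    most pairwise wins is chosen (ties broken at random)."""
    dim = random.randrange(2)
    wins = [0] * len(options)
    for i in range(len(options)):
        for j in range(i + 1, len(options)):
            if options[i][dim] > options[j][dim]:
                wins[i] += 1
            else:
                wins[j] += 1
    best = max(wins)
    return random.choice([k for k, w in enumerate(wins) if w == best])

def shares(options, trials=4000):
    counts = [0] * len(options)
    for _ in range(trials):
        counts[tournament_winner(options)] += 1
    return [c / trials for c in counts]

print("A, B:   ", shares([A, B]))     # roughly an even split
print("A, B, S:", shares([A, B, S]))  # S cannibalizes B; A keeps its share
```

The same sketch shows why asymmetric dominance resists a pure local account: an option dominated by S would lose to S on either attended dimension, so it can only shed pairwise contests, and the tally offers it no channel for boosting its dominator's total.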

Empirical evidence suggests that human decision-makers flexibly shift between architectures depending on task demands. Eye-tracking studies show that small choice sets elicit holistic global scanning patterns, while larger sets prompt attribute-based pairwise processing. Time pressure and cognitive load push evaluation toward local comparison. This flexibility carries a profound implication for modeling: the same options presented to the same individual can produce different preference orderings depending on which comparison architecture the cognitive system deploys. The scope of context is itself context-dependent—a recursive complication suggesting that context effects are as much about the structure of the evaluation process as about the structure of the choice set.

Takeaway

How you compare options matters as much as what you are comparing. The same choice set can produce different preferences depending on whether the mind evaluates holistically or decomposes into pairs—making comparison architecture itself a hidden variable in every decision.

These three frameworks converge on a foundational insight: preference is not a stable retrieval of pre-existing utility but an emergent property of the computational process that constructs it. Whether through competitive inhibition, divisive normalization, or the scope of comparison, the mechanism of evaluation is inseparable from its output.

The theoretical frontier lies in integration. MDFT and divisive normalization offer compatible but distinct accounts of how context enters the computation—one emphasizing temporal accumulation dynamics, the other neural coding efficiency. The local-versus-global distinction adds an architectural variable that neither model fully endogenizes. A unified framework would need to specify not only how options interact within a given comparison structure but when and why the system selects one architecture over another.

For formal decision theory, the implication is direct. If the independence of irrelevant alternatives fails not as a behavioral anomaly but as a predictable consequence of neural computation, then normative theory must either accommodate context sensitivity at its foundations or accept a permanent, principled gap between the rational ideal and the biological reality of choice.