Consider a seemingly simple problem: you must select the optimal route through ten cities, visiting each exactly once. With only ten cities, the solution space contains over three million possible orderings. Add ten more and the count explodes past two quintillion; by sixty cities it exceeds the number of atoms in the observable universe. This is the traveling salesman problem, and it belongs to a complexity class that has humbled computer scientists for decades. Yet we navigate analogous decisions constantly—choosing careers, allocating resources, selecting from countless options in supermarkets designed to overwhelm.
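
To make the blowup concrete, here is a minimal Python sketch: first the raw counts, then a brute-force solver (brute_force_tsp is an illustrative name, not a library routine) that is exact but hopeless beyond roughly a dozen cities.

```python
import math
from itertools import permutations

# The number of distinct orderings of n cities grows factorially.
for n in (10, 15, 20):
    print(f"{n} cities: {math.factorial(n):,} orderings")
# 10 cities: 3,628,800 orderings
# 15 cities: 1,307,674,368,000 orderings
# 20 cities: 2,432,902,008,176,640,000 orderings

def brute_force_tsp(dist):
    """Exact TSP by exhaustive enumeration: O(n!) time, feasible only for
    tiny n. `dist` is a symmetric matrix (list of lists) of distances."""
    cities = range(1, len(dist))              # fix city 0 as the start
    best_tour, best_len = None, float("inf")
    for order in permutations(cities):
        tour = (0, *order, 0)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len
```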

The orthodox view in economics long held that departures from optimal choice represent failures of rationality—cognitive limitations to be overcome through better education, clearer thinking, or technological augmentation. But computational complexity theory reveals something far more profound: for many problems we face, optimal solutions are not merely difficult to find but intractable within any realistic time constraint: no known algorithm escapes the exponential blowup, and there is strong reason to believe none exists. For large enough instances, the universe will undergo heat death before an exact algorithm completes.

This insight fundamentally restructures how we understand bounded rationality. Herbert Simon's foundational observation that humans satisfice rather than optimize was initially framed as a concession to our cognitive limitations. Computational complexity theory transforms this interpretation entirely. What appears as cognitive inadequacy is actually computational necessity. Any intelligent system operating under finite resources—biological, silicon, or hypothetical—must abandon optimality for tractability. The question becomes not whether we are rational, but whether we are resource-rational: optimally deploying limited cognitive resources in an intractable world.

NP-Hard Decisions: When Optimality Becomes Impossible

Computational complexity theory partitions problems into classes based on how solution difficulty scales with problem size. Problems in class P can be solved in polynomial time—double the input, and computation time might merely quadruple. But for NP-hard problems, the best known exact algorithms scale exponentially: double the input, and computation time can multiply by billions. Whether polynomial-time algorithms exist for them is the famous P versus NP question, and the overwhelming consensus is that they do not. The traveling salesman problem, Boolean satisfiability, and optimal scheduling all inhabit this intractable realm.
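
The contrast is easiest to feel with numbers. The sketch below, assuming for illustration a machine that performs one operation per nanosecond, compares a quadratic algorithm with an exponential one as the input doubles.

```python
# Contrast polynomial and exponential scaling as the input size doubles,
# assuming (for illustration) a machine doing one operation per nanosecond.
OPS_PER_SEC = 1e9

def runtime_seconds(ops):
    return ops / OPS_PER_SEC

for n in (20, 40, 80):
    poly = n ** 2      # a polynomial-time algorithm, O(n^2)
    expo = 2 ** n      # an exponential-time algorithm, O(2^n)
    print(f"n={n:2d}  n^2 takes {runtime_seconds(poly):.2e}s, "
          f"2^n takes {runtime_seconds(expo):.2e}s")
# n=20  n^2 takes 4.00e-07s, 2^n takes 1.05e-03s
# n=40  n^2 takes 1.60e-06s, 2^n takes 1.10e+03s   (~18 minutes)
# n=80  n^2 takes 6.40e-06s, 2^n takes 1.21e+15s   (~38 million years)
```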

What has remained underappreciated outside theoretical computer science is how many everyday decisions map directly onto NP-hard problem structures. Consider assembling an optimal portfolio from hundreds of assets with complex correlations. Or scheduling meetings across dozens of participants with varying constraints. Or selecting which features to include in a product given interdependent costs and benefits. These are not exotic mathematical abstractions—they are the texture of professional and personal life.

The implications for decision theory are severe. Classical expected utility maximization assumes agents can identify the optimal action and then execute it. But identification itself requires computation, and for NP-hard problems, no known algorithm—and likely no possible algorithm—can guarantee optimal solutions in polynomial time. This is not a matter of current technological limitations. It is a fundamental boundary established by the structure of computation itself.

Neuroscience corroborates these theoretical constraints at the biological level. Neural computation operates through electrochemical signaling with inherent speed limits. The human brain, despite its roughly eighty-six billion neurons and hundred trillion synaptic connections, runs on approximately twenty watts—less than a dim light bulb. This metabolic budget imposes hard constraints on computational throughput. fMRI studies reveal that complex decisions recruit prefrontal regions extensively, with measurable metabolic costs that increase with problem complexity.

The convergence of complexity theory and neuroscience points toward a unified understanding: biological cognition evolved under computational constraints that preclude optimality for most interesting problems. Our brains are not flawed computers failing to implement expected utility maximization. They are sophisticated approximate inference engines that never could have implemented exact optimization in the first place.

Takeaway

Many real-world decisions belong to complexity classes where optimal solutions require more computation than any physical system—biological or artificial—can perform. Recognizing this transforms 'irrational' behavior from failure into mathematical necessity.

Resource-Rational Tradeoffs: Cognition Has Costs

The framework of resource rationality incorporates a crucial variable that classical decision theory ignores: the cost of computation itself. When thinking is free, the optimal strategy is always to think more—to gather additional information, consider more options, refine probability estimates indefinitely. But thinking is never free. Every cognitive operation consumes time and energy and incurs opportunity costs.

Resource rationality reframes the optimization problem. Rather than asking 'what is the best decision?' it asks 'what is the best decision given the costs of finding better decisions?' This seemingly subtle shift has profound implications. An agent might rationally choose a suboptimal action if the expected improvement from further deliberation is outweighed by deliberation costs. The bounds on rationality become features of the optimization landscape, not bugs in the optimizer.

Formal models of resource rationality draw on concepts from theoretical computer science and information theory. The value of computation—the expected improvement in decision quality from additional processing—must be weighed against its costs. This creates a meta-decision problem: how much to think before acting. Interestingly, this meta-problem is itself computationally hard, leading to hierarchies of bounded approximation.
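
A minimal sketch of this stopping rule, with illustrative names (deliberate, expected_gain) and a deliberately simple setting: option values drawn uniformly from [0, 1], so the expected gain from one more sample has a closed form.

```python
import random

def deliberate(sample_option, expected_gain, cost_per_step, max_steps=10_000):
    """Myopic value-of-computation rule (illustrative): keep sampling new
    options while the expected improvement from one more sample exceeds
    the cost of drawing it."""
    best = sample_option()
    for _ in range(max_steps):
        if expected_gain(best) <= cost_per_step:
            break                     # further thought isn't worth its cost
        best = max(best, sample_option())
    return best

# With option values uniform on [0, 1], the expected improvement of one more
# draw over a current best b is (1 - b)^2 / 2 (integrate x - b from b to 1),
# so deliberation stops once the current best is already high.
best = deliberate(sample_option=random.random,
                  expected_gain=lambda b: (1 - b) ** 2 / 2,
                  cost_per_step=0.001)
print(f"acted on an option worth {best:.3f}")   # typically around 0.96
```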

Empirical work in behavioral economics gains new interpretation under this framework. Consider the well-documented phenomenon of choice overload, where decision quality degrades with increasing options. Classical rationality cannot explain why more options would harm decisions—more information should only help. But resource rationality explains this immediately: the computational cost of evaluating additional options eventually exceeds the expected benefit of finding marginally better choices.
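
A stylized model, not fitted to any data, shows the effect: if each additional option costs a fixed amount of evaluation effort, the net value of choice rises, peaks, and then falls.

```python
# A stylized model of choice overload: option values are uniform on [0, 1],
# so the expected best of n options is n / (n + 1); each evaluation costs c.
c = 0.01   # evaluation cost per option (illustrative)

def net_value(n):
    return n / (n + 1) - c * n   # expected best minus total evaluation cost

for n in (1, 5, 9, 20, 50, 90):
    print(f"{n:2d} options -> net value {net_value(n):+.3f}")
#  1 options -> net value +0.490
#  5 options -> net value +0.783
#  9 options -> net value +0.810   <- the peak: beyond this, more choice hurts
# 20 options -> net value +0.752
# 50 options -> net value +0.480
# 90 options -> net value +0.089
```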

Neuroeconomic studies support resource-rational accounts directly. Anterior cingulate cortex shows activation patterns consistent with tracking the expected value of cognitive control—essentially computing whether further deliberation is worth its metabolic cost. Dopaminergic systems appear to regulate the allocation of cognitive effort based on expected rewards, implementing something like a cost-benefit analysis of thinking itself. The brain does not merely compute decisions; it computes whether to compute.

Takeaway

Cognitive effort is itself a scarce resource that rational agents must allocate strategically. The optimal amount of thinking depends on the costs of thinking—making deliberate cognitive shortcuts not failures of rationality but expressions of it.

Satisficing as Strategy: Herbert Simon's Vindication

Herbert Simon introduced satisficing in the 1950s to describe how decision-makers accept 'good enough' options rather than searching for optimal ones. The concept was revolutionary but initially framed as a description of human limitation—we satisfice because we cannot optimize. Computational complexity theory inverts this framing entirely. We satisfice because satisficing is often the optimal strategy for boundedly computational agents.

Consider the formal structure of satisficing algorithms. The decision-maker establishes an aspiration level—a threshold of acceptability—and searches until finding an option that exceeds this threshold. This approach trades optimality for tractability. Crucially, for many problem structures, satisficing algorithms approximate optimal solutions remarkably well while requiring orders of magnitude less computation.
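
In code, the strategy is almost embarrassingly short. The sketch below (satisfice is our illustrative name) scans options lazily and stops at the first one that clears the aspiration level.

```python
import random

def satisfice(options, utility, aspiration):
    """Return the first option whose utility meets the aspiration level.
    Examines options one at a time and stops at the first 'good enough' hit."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None   # aspiration never met; a caller might lower it and retry

# Example: stop at the first candidate scoring at least 0.99 instead of
# scanning all million candidates for the single best one.
random.seed(0)
candidates = (random.random() for _ in range(1_000_000))   # lazy stream
pick = satisfice(candidates, utility=lambda x: x, aspiration=0.99)
print(f"accepted {pick:.4f} after a tiny fraction of the search space")
```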

The mathematics here is illuminating. For certain problem classes, decision quality improves only logarithmically with computation: multiplying effort tenfold buys roughly the same small increment in quality each time. This means enormous increases in computational effort yield diminishing improvements in decision quality. A satisficing solution found in seconds might be within a few percent of an optimal solution that would require years to discover. The rational allocation of computational resources often favors quick approximation over exhaustive search.
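
A standard result makes the diminishing returns concrete: the expected best of n draws from an exponential distribution is the nth harmonic number, roughly ln(n), so each tenfold increase in search effort buys about the same small increment in quality.

```python
# Expected best of n draws from an Exp(1) distribution is the nth harmonic
# number H_n ≈ ln(n) + 0.577: quality grows only logarithmically with effort.
def expected_best(n):
    return sum(1 / k for k in range(1, n + 1))   # H_n, computed exactly

for n in (10, 100, 1_000, 10_000):
    print(f"evaluate {n:>6,} options -> expected best {expected_best(n):.2f}")
# evaluate     10 options -> expected best 2.93
# evaluate    100 options -> expected best 5.19
# evaluate  1,000 options -> expected best 7.49
# evaluate 10,000 options -> expected best 9.79
```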

Ecological rationality—the study of how cognitive strategies match environmental structures—extends these insights. Gerd Gigerenzer and colleagues have demonstrated that simple heuristics often outperform complex optimization strategies in uncertain environments. The recognition heuristic, the take-the-best algorithm, and other 'fast and frugal' methods exploit environmental regularities to achieve excellent performance with minimal computation. These are not cognitive shortcuts that sacrifice accuracy for speed. They are adapted algorithms that leverage problem structure.
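
A toy version of take-the-best, using the city-size task from Gigerenzer and Goldstein's work with made-up cue values, shows how little computation the heuristic needs.

```python
import random

def take_the_best(a, b, cues):
    """Take-the-best (illustrative sketch): compare two options cue by cue,
    in descending order of cue validity, and decide on the first cue that
    discriminates between them."""
    for cue in cues:                    # cues are pre-sorted by validity
        va, vb = cue(a), cue(b)
        if va != vb:                    # first discriminating cue decides
            return a if va > vb else b
    return random.choice([a, b])        # nothing discriminates: guess

# Toy task: which of two German cities is larger? Cue values are made up.
cities = {
    "Berlin": {"capital": 1, "has_team": 1, "on_river": 1},
    "Bochum": {"capital": 0, "has_team": 1, "on_river": 0},
}
cues = [lambda c: cities[c]["capital"],     # most valid cue first
        lambda c: cities[c]["has_team"],
        lambda c: cities[c]["on_river"]]
print(take_the_best("Berlin", "Bochum", cues))   # -> Berlin
```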

The vindication of satisficing has implications extending beyond individual decision-making to artificial intelligence and organizational design. As AI systems confront increasingly complex optimization problems, the lessons of bounded rationality become engineering imperatives. The most sophisticated modern AI systems—from game-playing algorithms to language models—employ approximation methods that trade theoretical optimality for practical tractability. Herbert Simon's insights, born from studying human cognition, now guide the design of artificial minds.

Takeaway

Satisficing—accepting good enough rather than seeking optimal—is not cognitive laziness but a sophisticated algorithm for navigating computational intractability. The best decision is often to stop deciding and act on sufficient information.

The computational perspective on choice dissolves a false dichotomy that has structured debates about rationality for decades. The question was never whether humans are rational or irrational—that framing presupposes that unbounded rationality is achievable and desirable. Computational complexity theory demonstrates that unbounded rationality is impossible for any finite system confronting real-world problems.

This insight carries both humility and validation. Humility, because our most careful deliberation cannot guarantee optimal outcomes in a computationally intractable world. Validation, because our heuristics and approximations are not failures but adaptations—sophisticated algorithms evolved to navigate impossibility. The satisficer is not the optimizer's lesser sibling but the only rational agent that could actually exist.

Resource rationality offers a rigorous framework for evaluating decisions not against an impossible standard of optimality but against the appropriate standard of optimal resource allocation. The question becomes: given the costs of computation and the structure of the problem, did we allocate our cognitive resources wisely? This is a question we can actually answer—and increasingly, one that connects human cognition, artificial intelligence, and the fundamental limits of computation itself.