In 2020, a landmark adversarial collaboration funded by the Templeton Foundation pitted integrated information theory against global neuronal workspace theory in a direct empirical showdown. But lurking behind both competitors was a family of theories that has quietly shaped the entire debate: higher-order theories of consciousness. These theories propose something counterintuitive—that a mental state becomes conscious not through its own intrinsic properties, but because another mental state is directed at it. You don't just see red; you have a thought about your seeing of red, and that second-order representation is what makes the experience conscious.

The idea has deep roots, stretching from Aristotle's remarks on perceiving that we perceive, through Locke's reflection, to contemporary formulations by David Rosenthal, Richard Brown, and Hakwan Lau. What gives higher-order theories their current urgency is that they make surprisingly specific claims about neural architecture—claims that can, in principle, be tested. Prefrontal cortex activity, metacognitive access, the dissociation between processing and awareness: these are not merely philosophical puzzles anymore. They are active research programs generating data that either vindicate or threaten the higher-order framework.

Yet the stakes extend well beyond neuroscience. If consciousness genuinely requires a capacity to represent one's own mental states, then the question of who or what can be conscious narrows dramatically. Infants, non-human animals, and current AI systems may fall outside the circle—not because they lack information processing, but because they lack the right kind of self-directed cognition. This article examines the architecture of higher-order theories, the neural evidence bearing on them, and the profound implications for how broadly we draw the boundaries of conscious experience.

HOT Versus HOP: Two Architectures for Meta-Representation

Higher-order theories agree on a core thesis: a mental state M is conscious when and only when the subject has an appropriate higher-order representation of M. But they diverge sharply on the nature of that representation. Higher-Order Thought (HOT) theory, most rigorously developed by David Rosenthal, holds that what makes a state conscious is a concurrent thought—a conceptual, propositional representation—that one is in that state. On this account, consciousness is fundamentally intellectual. You experience the redness of a sunset because you entertain the thought, however fleeting, that you are currently having a red-ish visual experience.

Higher-Order Perception (HOP) theory, by contrast, posits a quasi-perceptual inner sense. Drawing on Locke's notion of reflection and more recently defended by William Lycan, HOP theories suggest we have something analogous to an internal monitor—a sense organ directed at our own mental states rather than the external world. The higher-order representation here is not propositional but perceptual in character, more like an inner scanning process than an inner assertion.

The mechanistic commitments diverge in testable ways. HOT theories predict that disrupting conceptual and executive processes should impair consciousness even when first-order sensory processing remains intact. HOP theories predict that consciousness should be vulnerable to disruption of monitoring circuits that need not be fully conceptual. Rosenthal has argued that HOP theories inherit all the problems of sense-datum theories—introducing a problematic intermediary between the subject and the first-order state—while HOT theories avoid this by treating the higher-order representation as transparent to its target.

Recent empirical work has complicated both positions. Hakwan Lau and Richard Brown's perceptual reality monitoring theory represents a sophisticated hybrid: it retains HOT theory's emphasis on prefrontal involvement but reconceptualizes the higher-order representation as a generative model that shapes the phenomenal character of experience from the top down. On their account, consciousness is not merely reported by higher-order states; it is partially constituted by them, because the brain's best guess about the source and reliability of a signal determines how—and whether—that signal shows up in awareness.
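The reality-monitoring idea can be made concrete with a toy Bayesian sketch. This is an illustrative simplification under assumed parameters, not a model drawn from Lau and Brown's papers: a higher-order stage computes the posterior probability that a first-order activation reflects an external signal rather than internal noise, and the content is admitted to awareness only when that posterior clears a threshold. The function names (`posterior_signal`, `gate`) and all numerical values are hypothetical.

```python
import math

def posterior_signal(evidence, mu_signal=1.0, sigma=1.0, prior=0.5):
    """Posterior probability that the first-order evidence reflects an
    external signal (mean mu_signal) rather than internal noise (mean 0).
    All parameters are illustrative assumptions, not empirical estimates."""
    def gauss_pdf(x, mu):
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (
            sigma * math.sqrt(2 * math.pi))
    l_signal = gauss_pdf(evidence, mu_signal)
    l_noise = gauss_pdf(evidence, 0.0)
    return prior * l_signal / (prior * l_signal + (1 - prior) * l_noise)

def gate(evidence, threshold=0.5):
    """Higher-order stage: the signal counts as conscious content only when
    the monitor judges it likely to reflect a real external source."""
    return posterior_signal(evidence) > threshold
```

The numbers are arbitrary; the structural point is that the gate operates on an estimate of the first-order signal's source and reliability, not on raw signal strength alone, which is what distinguishes this picture from a simple first-order threshold account.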

This shift matters because it addresses a longstanding objection: that HOT theories make consciousness epiphenomenal to first-order processing. If higher-order representations actively modulate perceptual content through predictive coding mechanisms, then they are not idle commentators but active participants in shaping phenomenal experience. The question becomes whether the empirical signatures of this top-down modulation can be cleanly separated from those predicted by first-order theories—a question that remains genuinely open.

Takeaway

The distinction between thinking about your experience and perceiving your experience is not just philosophical wordplay—it generates different predictions about which brain circuits must be intact for consciousness to exist.

Prefrontal Necessity: What Lesion and No-Report Studies Reveal

If higher-order theories are correct, the prefrontal cortex should be indispensable for conscious experience. This is their most exposed empirical flank. HOT theories, in particular, implicate the dorsolateral and ventromedial prefrontal regions—areas associated with metacognition, executive control, and self-referential processing—as critical substrates. Early neuroimaging studies seemed to confirm this: prefrontal activation reliably accompanied the transition from unconscious to conscious perception in masking paradigms, binocular rivalry, and attentional blink experiments.

Then came the challenges. Lesion studies by Philipp Sterzer, Lau, and others revealed that patients with extensive prefrontal damage often report surprisingly intact conscious experience. They see, hear, and feel. Their subjective reports, while sometimes impoverished in metacognitive precision, do not suggest the wholesale absence of phenomenal awareness that strict HOT theories would predict. Penfield's classic observation that direct stimulation of prefrontal cortex rarely produces experiential phenomena—unlike stimulation of sensory cortices—added further discomfort.

The counterargument from higher-order theorists is subtle. Lau has proposed that prefrontal involvement in consciousness operates through signal detection mechanisms rather than brute activation. On this view, what matters is not whether prefrontal cortex lights up on an fMRI scan, but whether prefrontal circuits modulate the gain and precision of first-order representations. Damage to prefrontal cortex might degrade the quality and metacognitive accessibility of conscious experience without eliminating it entirely—a prediction consistent with the graded deficits observed in patients.
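The graded-degradation claim can be illustrated with a small signal detection simulation, a toy sketch under assumed parameters rather than a fit to patient data: first-order detection operates on sensory evidence, while a higher-order monitor reads that same evidence through additional noise of its own. Raising the monitor's noise (a stand-in for prefrontal damage) leaves detection accuracy untouched but erodes how well confidence tracks correctness. The function `simulate` and all thresholds are hypothetical.

```python
import random

def simulate(n=20000, monitor_noise=0.3, seed=7):
    """Toy two-stage model: noisy first-order detection plus a noisier
    higher-order confidence readout. Parameters are illustrative only."""
    rng = random.Random(seed)
    n_correct = n_incorrect = 0
    hc_correct = hc_incorrect = 0
    for _ in range(n):
        present = rng.random() < 0.5
        # First-order evidence: signal mean 1.0 vs. noise mean 0.0
        evidence = (1.0 if present else 0.0) + rng.gauss(0, 1.0)
        detected = evidence > 0.5
        correct = detected == present
        # Higher-order readout: the monitor sees evidence through extra noise
        readout = evidence + rng.gauss(0, monitor_noise)
        high_conf = abs(readout - 0.5) > 1.0
        if correct:
            n_correct += 1
            hc_correct += high_conf
        else:
            n_incorrect += 1
            hc_incorrect += high_conf
    accuracy = n_correct / n
    # Metacognitive sensitivity: confidence should be higher on correct trials
    meta = hc_correct / n_correct - hc_incorrect / n_incorrect
    return accuracy, meta

acc_intact, meta_intact = simulate(monitor_noise=0.3)
acc_lesion, meta_lesion = simulate(monitor_noise=3.0)
# Same seed means the same evidence stream: first-order accuracy is
# identical, while metacognitive sensitivity collapses with monitor noise.
```

The pattern this produces, unchanged detection with degraded confidence-accuracy coupling, is the dissociation the higher-order reply predicts for prefrontal lesions: consciousness-related monitoring gets worse in quality without first-order perception disappearing.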

No-report paradigms have further destabilized the debate. In standard consciousness experiments, subjects must report their experience—pressing a button, saying a word—and this act of reporting itself engages prefrontal and executive circuits. Lucia Melloni and colleagues have shown that when reporting demands are removed, much of the prefrontal activation associated with consciousness vanishes. This suggests that earlier studies may have been measuring the neural correlates of reporting consciousness rather than consciousness itself—a conflation that higher-order theorists must take seriously.

Higher-order theorists have responded by drawing a distinction between the neural correlates of consciousness (NCC) in the strict sense and the neural prerequisites and consequences of consciousness. Reporting is a consequence; the higher-order representation itself, they argue, may be realized in more circumscribed prefrontal or prefrontal-parietal circuits that survive no-report paradigms. The debate is now at a genuinely empirical impasse, with both sides designing increasingly precise experiments to isolate the minimally sufficient neural conditions for phenomenal experience. What is clear is that the simple equation of prefrontal activity with consciousness has been irrevocably complicated.

Takeaway

The brain region most associated with 'thinking about thinking' may be less essential to raw conscious experience than higher-order theories initially assumed—forcing a recalibration of how we distinguish the mechanisms of awareness from the mechanisms of reporting awareness.

Animal Consciousness Implications: Where the Boundaries Sharpen

Perhaps no consequence of higher-order theories provokes more resistance than their apparent implications for animal consciousness. If consciousness requires a higher-order representation of one's own mental states—something approaching a thought about a thought—then creatures without sophisticated metacognitive capacities may lack phenomenal experience altogether. A dog might process pain signals, withdraw its paw from a flame, and exhibit behavioral distress, yet on a strict HOT reading, there would be nothing it is like for the dog to experience that pain. The lights would be off.

This strikes many researchers as absurd—a reductio ad absurdum of the theory rather than a legitimate prediction. The Cambridge Declaration on Consciousness (2012), signed by a prominent group of neuroscientists, affirmed that many non-human animals possess the neurological substrates sufficient for conscious experience. Comparative neuroanatomy reveals that mammals, birds, and even some invertebrates possess the thalamocortical or pallial circuits implicated in first-order theories of consciousness. The behavioral evidence—from pain avoidance to play to apparent empathy—is voluminous.

Higher-order theorists have developed several responses. One strategy, pursued by Rosenthal, is to bite the bullet partially: perhaps some animals with relatively developed prefrontal-like structures (great apes, cetaceans, corvids) do have rudimentary higher-order representations and thus rudimentary consciousness, while simpler organisms genuinely lack it. This creates a graduated spectrum rather than a binary divide, but it still excludes a vast swath of the animal kingdom from the circle of conscious beings.

A more conciliatory approach, explored by Joseph LeDoux and Brown, separates the claim about what consciousness is from the claim about what consciousness requires neurally. They argue that higher-order theories identify the computational structure of consciousness—meta-representation—without dictating that it must be realized in mammalian prefrontal cortex. Different neural architectures might implement functionally equivalent higher-order processes. A bird's nidopallium caudolaterale, which is anatomically distinct from mammalian prefrontal cortex but functionally analogous, might support the relevant kind of self-monitoring.

This move is intellectually honest but introduces a tension. If higher-order representation can be realized in vastly different neural substrates—and if we become increasingly liberal about what counts as a higher-order representation—the theory risks losing its empirical teeth. It becomes compatible with nearly any attribution of consciousness, which is another way of saying it predicts nothing specific. The challenge for higher-order theorists is to specify the functional signature of the relevant meta-representation precisely enough that we can test for its presence across species without simply gerrymandering the definition to get the answers we antecedently prefer.

Takeaway

Higher-order theories force us to confront an uncomfortable possibility: that our moral intuitions about animal suffering may outrun the evidence, and that a theory of consciousness should be judged by its explanatory power, not by whether its conclusions make us comfortable.

Higher-order theories of consciousness remain among the most architecturally precise frameworks in consciousness science. They make clear claims about what consciousness requires—meta-representation—and those claims generate predictions about neural substrates, clinical deficits, and the distribution of consciousness across species. That precision is both their strength and their vulnerability.

The empirical landscape is shifting beneath them. No-report paradigms erode the prefrontal-consciousness equation. Animal cognition research reveals metacognitive capacities in unexpected species. And hybrid models like Lau and Brown's perceptual reality monitoring theory are quietly absorbing insights from predictive processing frameworks, blurring the boundary between higher-order and first-order accounts.

What remains genuinely valuable in the higher-order tradition is its insistence that not all information processing is conscious and that the difference requires explanation. Whether that explanation ultimately rests on thoughts about thoughts, inner perception, or generative models of signal reliability, the question it forces us to answer—what makes the difference between processing and experience—is the hard problem wearing empirical clothes.