David Rosenthal's higher-order thought theory offers one of the most systematic attempts to explain what makes a mental state conscious. The core claim is elegantly simple: a mental state becomes conscious when, and only when, it is accompanied by a suitable higher-order thought about that state. You are conscious of seeing red not merely because you have the perceptual state, but because you have a thought about that perceptual state.
This representationalist framework attempts to naturalize consciousness by reducing phenomenal awareness to a specific cognitive relation. The appeal is obvious—if consciousness is just a matter of having the right kind of thought about one's mental states, then consciousness becomes tractable within standard cognitive science. No mysterious qualia, no irreducible phenomenal properties, just representational states all the way down.
Yet the theory faces formidable objections that cut to the heart of what we mean by consciousness. The threat of infinite regress looms whenever we ask about the status of the higher-order thoughts themselves. And the implications for animal consciousness raise profound empirical and ethical concerns. Examining these challenges reveals both the explanatory power and the significant limitations of Rosenthal's ambitious proposal.
The Higher-Order Condition
Rosenthal's theory posits a specific mechanism by which consciousness arises. An unconscious mental state—say, a visual perception of blue—becomes conscious when a distinct, contemporaneous higher-order thought represents that state. The higher-order thought must have the content that one is in that very mental state. This isn't mere attention or monitoring; it's a full-fledged thought with propositional structure.
The transformation is supposed to be constitutive rather than merely causal. The higher-order thought doesn't cause consciousness to appear in the target state; rather, having the appropriate higher-order thought just is what it means for the target state to be conscious. Consciousness, on this view, is a relational property—a matter of being represented in the right way.
Crucially, Rosenthal insists the higher-order thought must be non-inferential. If you consciously deduce that you must be in pain based on your behavior, that doesn't make the pain conscious in the relevant phenomenal sense. The higher-order thought must arise spontaneously and immediately, tracking the target state without mediation.
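Putting these conditions together, the view admits of a rough schematic statement. The formalization below is an editorial gloss for clarity, not Rosenthal's own notation; "About(T, M)" abbreviates the requirement that T have the content that one is, oneself, in state M.

$$
\mathrm{Conscious}(M) \iff \exists T \,\big[\, \mathrm{Thought}(T) \wedge \mathrm{About}(T, M) \wedge \mathrm{Contemporaneous}(T, M) \wedge \mathrm{NonInferential}(T) \,\big]
$$

The biconditional makes the relational character of the proposal explicit: nothing intrinsic to M changes when it becomes conscious; what changes is whether a suitable T exists.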
This framework handles certain puzzles elegantly. It explains how we can have unconscious mental states: they simply lack accompanying higher-order thoughts. It also accounts for the difference between conscious and unconscious perception, as in blindsight, where patients process visual information without awareness. On the theory, blindsighted vision involves first-order visual states that guide behavior but lack accompanying higher-order thoughts, which is exactly the dissociation one would expect.
The explanatory ambition extends to the qualitative character of experience itself. Rosenthal argues that what it's like to be in a conscious state is entirely determined by how the higher-order thought represents it. If the higher-order thought misrepresents the target state, the conscious experience will track the misrepresentation. This yields the counterintuitive prediction that phenomenal character can come apart from the intrinsic properties of first-order states.
Takeaway: Consciousness may not be an intrinsic property of mental states at all, but rather a relational feature, emerging only when mental states become objects of their own representation.
Regress Concerns
The most persistent objection to higher-order theories is the threat of infinite regress. If a mental state M1 becomes conscious by being represented by a higher-order thought HOT1, what about HOT1 itself? Must it too be conscious? If consciousness requires higher-order representation, then HOT1 would need its own higher-order thought HOT2, and so on infinitely.
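The structure of the worry can be made explicit. If one adds the assumption that the conferring thought must itself be conscious, the theory's own condition reapplies at every level (again a rough schematic in the same editorial notation as above, not Rosenthal's own):

$$
\mathrm{Conscious}(M) \Rightarrow \exists T_1 \,\big[\, \mathrm{About}(T_1, M) \wedge \mathrm{Conscious}(T_1) \,\big] \Rightarrow \exists T_2 \,\big[\, \mathrm{About}(T_2, T_1) \wedge \mathrm{Conscious}(T_2) \,\big] \Rightarrow \cdots
$$

A finite mind could not host the infinite hierarchy of thoughts this chain demands, so something in the setup has to give.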
Rosenthal's response is to deny the premise. Higher-order thoughts need not themselves be conscious to confer consciousness on their targets. The higher-order thought operates, as it were, in the dark—doing its representational work without itself being illuminated by awareness. Only if we attend to the higher-order thought does it become conscious, requiring yet another higher-order thought.
Critics find this response unsatisfying. How can an unconscious thought create consciousness? There seems to be something deeply puzzling about the idea that awareness emerges from states that are themselves entirely lacking in awareness. The metaphor of light being generated by darkness captures the intuitive strangeness.
The problem intensifies when we consider the phenomenology of introspection. When you become aware that you're aware of something, this meta-awareness itself seems to have a distinctive phenomenal character. It feels like something to notice your own consciousness. But if higher-order thoughts are typically unconscious, this phenomenology becomes mysterious.
Some defenders argue the regress objection proves too much—it would apply equally to any representational theory of consciousness. Perhaps the generation of consciousness from non-conscious components isn't uniquely problematic for higher-order theory. The brain, after all, generates consciousness from neurons that are not themselves conscious. Still, the question of how unconscious representations create conscious experience remains philosophically pressing.
Takeaway: The regress problem reveals a deeper puzzle. Can consciousness genuinely emerge from components that entirely lack it, or does awareness require awareness at some foundational level?
Animal Consciousness Implications
Higher-order thought theory carries striking implications for animal consciousness. If consciousness requires thoughts about one's own mental states, then creatures lacking the cognitive sophistication for such meta-representation would lack consciousness entirely. This potentially excludes most non-human animals from the realm of phenomenal experience.
Rosenthal acknowledges this implication and is largely prepared to accept it. He suggests that many animals may have rich mental lives without consciousness: processing information, responding to stimuli, even exhibiting complex behavior, all without any accompanying phenomenal awareness. They would be, in effect, sophisticated zombies.
The empirical picture complicates this stance considerably. Research on animal cognition increasingly reveals meta-cognitive capacities in various species. Dolphins pass mirror self-recognition tests. Corvids cache and retrieve food in ways that suggest something like mental time travel. Great apes show evidence of tracking what others can see and know. Where exactly should we draw the line?
The moral implications are profound. If higher-order theory is correct and most animals lack consciousness, then their suffering, in the phenomenal sense, may not exist at all. A dog in pain would have nociceptive processing without any experience of pain. This conclusion strikes many as not merely counterintuitive but morally dangerous, potentially licensing indifference to animal welfare.
Defenders might argue we should follow the theory where it leads rather than let moral preferences dictate metaphysics. But the burden of proof seems asymmetric here. Given the scientific evidence for continuity between human and animal cognition, and the massive moral stakes involved, excluding animal consciousness requires extraordinarily strong theoretical justification. Whether higher-order thought theory provides such justification remains deeply contested.
Takeaway: Theories of consciousness carry ethical weight. How we define awareness may determine which beings we recognize as capable of suffering and deserving of moral consideration.
Higher-order thought theory represents a serious attempt to demystify consciousness by reducing it to representational relations between mental states. Its explanatory framework handles certain phenomena elegantly—unconscious perception, the gradations of awareness, the possibility of misrepresenting one's own experiences.
Yet the theory's limitations are equally serious. The regress problem, while perhaps not fatal, reveals deep puzzles about how consciousness could emerge from unconscious representations. And the implications for animal consciousness sit uneasily with both empirical evidence and moral intuition.
Perhaps the deepest lesson is methodological. Any theory of consciousness must balance theoretical elegance against phenomenological adequacy and moral seriousness. Rosenthal's framework excels at the former while struggling with the latter. The hard problem persists.