They call it the hard problem of consciousness, but a better term might be the impossible problem of consciousness. The whole point is that the qualitative aspects of our conscious experience, or “qualia,” are inexplicable. They slip through the explanatory framework of science, which is reductive: It explains things by breaking them down into parts and describing how they fit together. Subjective experience has an intrinsic je ne sais quoi that can’t be decomposed into parts or explained by relating one thing to another. Qualia can’t be grasped intellectually. They can only be experienced firsthand.
For the past five years or so, I’ve been trying to untangle the cluster of theories that attempt to explain consciousness, traveling the world to interview neuroscientists, philosophers, artificial-intelligence researchers, and physicists—all of whom have something to say on the matter. Most duck the hard problem, either bracketing it until neuroscientists explain brain function more fully or accepting that consciousness has no deeper explanation and must be wired into the base level of reality.
Although I made it a point to maintain an outsider’s view of science in my reporting, staying out of academic debates and finding value in every approach, I find both positions defensible but dispiriting. I cling to the intuition that consciousness must have some scientific explanation that we can achieve. But how? It’s hard to imagine how science could possibly expand its framework to accommodate the redness of red or the awfulness of fingernails on a chalkboard. But there is another option: to suppose that we are misconstruing our experience in some way. We think that it has intrinsic qualities, but maybe on closer inspection it doesn’t.
Not that this is an easy position to take. Two leading theories of consciousness take a stab at it. Integrated Information Theory (IIT) says that the neural networks in our head are conscious since neurons act together in harmony—they form collective structures with properties beyond those of the individual cells. If so, subjective experience isn’t primitive and unanalyzable; in principle, you could follow the network’s transitions and read its mind. “What IIT tries to do is completely avoid any intrinsic quality in the traditional sense,” the father of IIT, Giulio Tononi, told me.
Another theory, known as predictive coding or predictive processing, reaches the same end by a different route. According to this theory, experiences are predictions we make about the world, and they have a qualitative aspect because we include ourselves in the prediction; qualia are the reasons we use to explain why we react the way we do. When our brain forms a prediction of how we’ll respond to a stimulus, it then does what the philosopher Daniel Dennett of Tufts University has called a “strange inversion”: It ascribes this prediction not to ourselves, but to the thing we’re responding to. Usually, we say that when we feel pain, we seek to avoid whatever is causing it; we say that babies are cute, so we coo; we say that honey is sweet, so we crave it.
But what may really be going on is that we reflexively seek to escape from the things that harm us, and pain is the story we tell ourselves about why; we attribute cuteness to a baby, when it’s really a statement about our own evolved response; we think honey is sweet because we crave it. Pain, cuteness, and sweetness seem unanalyzable to us, but by undoing the strange inversion, we can, in fact, analyze them in terms of our own biology, using the standard relational language of science.
A leading advocate of predictive coding, the philosopher Andy Clark at the University of Sussex, put it to me this way: “There’s pain because pain is just a simplified way to point to a whole web of dispositions: to move toward or move away from things, to try to avoid those things, take painkillers, all of that stuff. If someone then says, ‘Well, why does the pain hurt?’ I think what we want to say is, ‘Because that’s what hurting just is. That sense of proclivities to move away from, to take painkillers—all the things we do that are distinctive for pain rather than pleasure.’”
These are fascinating theories and may well be right, but I don’t think they are what we need to resolve the hard problem. Like all scientific theories of consciousness, they are told from a third-person perspective, whereas the hard problem of consciousness concerns the first-person perspective. As long as qualia feel intrinsic to us, they still elude scientific description.
So, it seems as though we have to deny that qualia do feel intrinsic to us. Dennett is the most famous proponent of that view. His 1991 book Consciousness Explained is one big takedown of qualia. If someone asks you whether you’re conscious, or if you ask yourself, you’ll answer, “Of course, silly,” but maybe that’s just plain wrong, Dennett suggested. After all, when we answer this question, we are reflecting on having been conscious a moment ago, and this retroactive judgment might be a convenient fiction.
The approach is usually called “illusionism,” although the word “illusion” is problematic. An illusion is itself an experience, so it would be circular to suppose that conscious experience is illusory. This objection might be overcome, but even so, telling someone they’re not conscious smacks of philosophical gaslighting. Dennett and other advocates of this approach to consciousness admit they have yet to explain how we could be so badly deluded.
Scarlet Is Like a Trumpet
But there is a less dismissive position that strikes me as promising. It is advocated in various ways by philosopher Kristjan Loorits of the University of Helsinki, psychologist Nao Tsuchiya at Monash University, and others. They suggest that qualia feel intrinsic only because we don’t give them further thought. But we could probe deeper. Introspecting on our experience, we might see that what we take to be intrinsic is relational. Loorits suggested that with artistic training or brain stimulation we could look beneath the intrinsic nature of qualia to see the raw associations that make them up, just as a musician hears the individual components in what, to most fans, is a wall of sound. “It should be possible to experience parts of those underlying structures directly, just as we can learn to experience the individual overtones of a sound,” he said. (Loorits knows whereof he speaks: He was a concert pianist before going into philosophy.)
The proposition, then, is that redness, pain, and the other qualities of experience are a blurred view of a dense thicket of relations. Red is red not because it just is, but because of a vast number of associations that we have learned or been born with. Some, such as the mathematician Richard P. Stanley, have speculated that all our experiences can be placed into a vast “qualia space,” in which each quale is defined in relation to every other quale. Qualia might not be as utterly unlike one another as they seem.
If redness is an intrinsic quality of experience, you have to see it to know it, whereas if it is relational, a friend could explain red to you—explain it so fully that, when you finally do see something red, you go, “Yup, just what I thought.” Your friend might start by comparing scarlet to the sound of a trumpet. They might also liken the color wheel—in which hues cycle from red to yellow to green to blue to violet to red again—to musical octaves. Through an accumulation of metaphors, they’d communicate to you everything that red means to them, until you achieved the experience of red without ever having seen it for yourself.
From the associations in language, psychologists have found, blind people learn the same color relationships as sighted people do. Helen Keller described understanding sights and sounds by comparison to touch: “Sweet, beautiful vibrations exist for my touch, even though they travel through other substances than air to reach me. So I imagine sweet, delightful sounds, and the artistic arrangement of them which is called music.” Even artificial neural networks, which lack not only vision but also any other form of sensory input that could serve as a reference point, can develop a model of color from a purely linguistic analysis.
This idea is still just an idea—it needs to be developed into a proper theory. But if qualia are relational from a first-person point of view, then they are directly amenable to the methods of science. Entire branches of mathematics specialize in the description of relations. Science wouldn’t need to expand its explanatory repertoire to explain consciousness. Rather, the answers would lie within us. Some of us might learn to dissect qualia through artistic training, others through meditative practice. We could draw on philosophical traditions such as phenomenology and Buddhism. Through greater self-awareness, we would learn to dissolve the hard problem.
Adapted from Putting Ourselves Back in the Equation: Why Physicists Are Studying Human Consciousness and AI to Unravel the Mysteries of the Universe by George Musser. Published by Farrar, Straus and Giroux. Copyright © 2023 by George Musser. All rights reserved. Reproduction of the text, in any form for distribution is strictly prohibited.