Zombies are supposed to be capable of asking any question about the nature of experience. It’s worth wondering, though, how a person or machine devoid of experience could reflect on experience it doesn’t have. Photograph by Ars Electronica / Flickr

The Australian philosopher David Chalmers famously asked whether “philosophical zombies” are conceivable—people who behave like you and me yet lack subjective experience. It’s an idea that has gotten many scholars interested in consciousness, including me. The reasoning is that, if such zombies, or sophisticated unfeeling robots, are conceivable, then physical properties alone—of the brain or a brain-like mechanism—cannot explain the experience of consciousness. Instead, some additional mental properties must account for the what-it-is-like feeling of being conscious. Figuring out how these mental properties arise has become known as the “hard problem” of consciousness.

But I have a slight problem with Chalmers’ zombies. Zombies are supposed to be capable of asking any question about the nature of experience. It’s worth wondering, though, how a person or machine devoid of experience could reflect on experience it doesn’t have. In an episode of the “Making Sense” (formerly known as “Waking Up”) podcast with neuroscientist and author Sam Harris, Chalmers addressed this puzzle. “I don’t think it’s particularly hard to at least conceive of a system doing this,” Chalmers told Harris. “I mean, I’m talking to you now, and you’re making a lot of comments about consciousness that seem to strongly suggest that you have it. Still, I can at least entertain the idea that you’re not conscious and that you’re a zombie who’s in fact just making all these noises without having any consciousness on the inside.”

This is not a strictly academic matter—if Google’s DeepMind develops an AI that starts asking, say, why the color red feels like red and not something else, there are only a few possible explanations. Perhaps it heard the question from someone else. It’s possible, for example, that an AI might learn to ask questions about consciousness simply by reading papers about consciousness. It also could have been programmed to ask that question, like a character in a video game, or it could have burped the question out of random noise. Clearly, asking questions about consciousness does not prove anything per se. But could an AI zombie formulate such questions by itself, without hearing them from another source or belching them out of random outputs? To me, the answer is clearly no. If I’m right, then we should seriously consider that an AI might be conscious if it asks questions about subjective experience unprompted. Because we can’t know whether it’s ethical to unplug such an AI until we know whether it’s conscious, we had better start listening for such questions now.

Our conscious experiences are composed of qualia, the subjective aspects of sensation—the redness of red, the sweetness of sweet. The qualia that compose conscious experiences are irreducible, incapable of being mapped onto anything else. If I were born blind, no one, no matter how articulate, would ever be able to give me a sense of the color blood and roses share. This would be true even if I were among a number of blind people who develop something called blindsight—the ability to avoid obstacles and accurately guess where objects appear on a computer monitor despite being blind.

Blindsight seems to demonstrate that some behaviors can be purely mechanized, so to speak, occurring without any subjective awareness—echoing Chalmers’ notion of zombies. The brains of blindsighted people appear to exploit preconscious areas of the visual system, yielding sighted behavior without visual experience. This often occurs after a person suffers a stroke or other injury to the visual cortex, the part of the cerebral cortex that processes visual information. Because the person’s eyes are still healthy, they may feed information hidden from consciousness to certain brain regions, such as the superior colliculus.

By the same token, there are at least a few documented cases of deaf hearing. One such case, detailed in a 2017 Philosophical Psychology report, is patient LS, a man deaf since birth, yet able to discriminate sounds based on their content. For people such as LS, this discernment occurs in silence. But if a deaf-hearing person were to ask the sort of questions people who can hear ask—“Doesn’t that sound have a weird sort of brassiness to it?”—then we’d have good reason to suspect this person isn’t deaf at all. (We couldn’t be absolutely sure because the question could be a prank.) Likewise, if an AI began asking, unprompted, the sorts of questions only a conscious being could ask, we’d reasonably form a similar suspicion that subjective experience has come online.

The 21st century is in dire need of a Turing test for consciousness. AI is learning how to drive cars, diagnose lung cancer, and write its own computer programs. Intelligent conversation may be only a decade or two away, and future super-AI will not live in a vacuum. It will have access to the Internet and all the writings of Chalmers and other philosophers who have asked questions about qualia and consciousness. But if tech companies beta-test an AI on a local intranet, isolated from such information, they could conduct a Turing-test-style interview to detect whether questions about qualia make sense to it.

What might we ask a potential mind born of silicon? How the AI responds to questions like “What if my red is your blue?” or “Could there be a color greener than green?” should tell us a lot about its mental experiences, or lack thereof. An AI with visual experience might entertain the possibilities suggested by these questions, perhaps replying, “Yes, and I sometimes wonder if there might also exist a color that mixes the redness of red with the coolness of blue.” On the other hand, an AI lacking any visual qualia might respond, “That is impossible; red, green, and blue each exist as different wavelengths.” Even if the AI attempts to play along or deceive us, answers like “Interesting, and what if my red is your hamburger?” would show that it missed the point.
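To make the idea concrete, here is a minimal sketch of what such an interview loop might look like in code. It is purely illustrative rather than anything proposed in the essay: the ask_model interface, the probe list, and the keyword heuristic are all my own assumptions, and no automatic score could substitute for the careful human judgment such a test would require.

```python
# A minimal, hypothetical sketch of the qualia-probe interview described
# above. `ask_model` stands in for whatever interface an isolated,
# intranet-only AI might expose; the cue lists are crude placeholders.

from typing import Callable, List

PROBE_QUESTIONS: List[str] = [
    "What if my red is your blue?",
    "Could there be a color greener than green?",
    "Why do you experience anything at all while processing the world?",
]

# Words suggesting the reply engages with subjective experience,
# versus deflecting to purely physical descriptions.
ENGAGEMENT_CUES = {"experience", "feels", "imagine", "wonder", "seems to me"}
DEFLECTION_CUES = {"wavelength", "rgb", "sensor", "pixel", "nanometer"}


def score_reply(reply: str) -> int:
    """Crude score: +1 per engagement cue, -1 per deflection cue."""
    text = reply.lower()
    return (sum(cue in text for cue in ENGAGEMENT_CUES)
            - sum(cue in text for cue in DEFLECTION_CUES))


def run_interview(ask_model: Callable[[str], str]) -> None:
    """Pose each probe question and print a rough engagement score.

    A high score would be, at best, weak evidence worth a closer human
    look; as noted above, a zombie could still game the keywords.
    """
    for question in PROBE_QUESTIONS:
        reply = ask_model(question)
        print(f"Q: {question}\nA: {reply}\nscore: {score_reply(reply)}\n")


# Example with a canned "zombie" that always deflects to physics:
if __name__ == "__main__":
    run_interview(lambda q: "Red is simply light at a wavelength near 700 nm.")
```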

Of course, it’s possible that an artificial consciousness might possess qualia vastly different from our own. In this scenario, questions about specific qualia, such as color qualia, might not click with the AI. But more abstract questions about qualia themselves should filter out zombies. For this reason, the best question of all would likely be that of the hard problem itself: Why does consciousness even exist? Why do you experience qualia while processing input from the world around you? If this question makes any sense to the AI, then we’ve likely found artificial consciousness. But if the AI clearly doesn’t understand concepts such as “consciousness” and “qualia,” then evidence for an inner mental life is lacking.

Building a consciousness detector is no small undertaking. Alongside such a Turing test, tomorrow’s researchers will likely apply today’s abstract theories of consciousness in an effort to infer the existence of consciousness from a computer’s wiring diagram. One such theory, integrated information theory, considers the amount of information integrated by a brain or other system, and it is already being applied to infer the existence of consciousness in brain-injured patients and even schools of fish. Indeed, even before the drive to detect artificial consciousness attracts substantial research funding, the need to detect consciousness in brain-injured patients has already taken the C-word off science’s taboo list.
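As a rough illustration of what “integration” means here, consider a toy two-node system: if the nodes always mirror each other, knowing one tells you everything about the other, while fully independent nodes share nothing. The sketch below computes the mutual information between the two nodes as a crude stand-in for integration; it is my own simplified illustration, not the formal measure (Φ) that integrated information theory actually uses.

```python
# Toy illustration of "integration": mutual information between two
# binary nodes of a tiny system. This is NOT integrated information
# (Phi) as formally defined; it only conveys the flavor of the idea.

import numpy as np


def mutual_information(joint: np.ndarray) -> float:
    """Mutual information (in bits) of a 2-D joint probability table."""
    joint = joint / joint.sum()                 # normalize, just in case
    px = joint.sum(axis=1, keepdims=True)       # marginal of node A
    py = joint.sum(axis=0, keepdims=True)       # marginal of node B
    nz = joint > 0                              # avoid log(0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))


# Two nodes that always copy each other: maximally "integrated".
coupled = np.array([[0.5, 0.0],
                    [0.0, 0.5]])
# Two independent nodes: no integration at all.
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

print(mutual_information(coupled))      # 1.0 bit
print(mutual_information(independent))  # 0.0 bits
```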

My own lab, led by Martin Monti at the University of California, Los Angeles, strives to improve the lives of brain-injured patients by developing better means of inferring consciousness from electrical or metabolic brain activity. Just as ethical tragedies arise when we pull the plug on patients who are aware yet unresponsive, similar tragedies will arise if we pull the plug on artificial consciousness. And just as my lab at UCLA relates theoretical measures of consciousness to the hospital bed behavior of brain-injured patients, future researchers must relate theoretical measures of artificial consciousness to an AI’s performance on something akin to a Turing test. When we close the textbook at the day’s end, we still need to consider the one question zombies can’t answer.

Joel Frohlich is a postdoctoral researcher studying consciousness in the laboratory of Martin Monti at the University of California, Los Angeles. He received his PhD in neuroscience in the laboratory of Shafali Jeste at UCLA while studying biomarkers of neurodevelopmental disorders. He is the editor in chief of the science-communication website “Knowing Neurons.” 
