
Artificial intelligence is moving fast. We can now converse with large language models such as ChatGPT as if they were human beings. Vision models can generate award-winning photographs as well as convincing videos of events that never happened. These systems are certainly getting smarter, but are they conscious? Do they have subjective experiences, feelings, and conscious beliefs in the same way that you and I do, but tables and chairs and pocket calculators do not? And if not now, then when—if ever—might this happen?


While some researchers suggest that conscious AI is close at hand, others, including me, believe it remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether. The prospect of artificial consciousness raises ethical, safety, and societal challenges significantly beyond those already posed by AI. Importantly, some of these challenges arise even when AI systems merely seem to be conscious, even if, under the hood, they are just algorithms whirring away in subjective oblivion.


Because of these concerns, I signed an open letter put together by the Association for the Mathematical Study of Consciousness (AMCS). Following hot on the heels of the much-publicized call to pause large-scale AI research, the letter argues that it is vital for public, industry, and governing bodies to understand whether and how AI systems could become conscious, to consider the implications, and to address the dangers. Around the same time, Anka Reuel of Stanford University and Gary Marcus, a leading voice on AI, sensibly called for the establishment of a global, neutral, and non-profit “international agency for AI” to coordinate global regulation of AI technologies. I think the remit of such an agency should cover artificial consciousness as well. Last week, Geoffrey Hinton, one of AI’s pioneers, resigned from Google to join the chorus of concern, having changed his mind about the immediacy and reality of the threats posed by the technology he helped develop. In my opinion, we should not even be trying to build conscious machines.


To get a handle on these challenges—and to clarify the confusing and hype-ridden debate around AI and consciousness—let’s start with some definitions. First, consciousness. Although precise definitions are hard to come by, intuitively we all know what consciousness is. It is what goes away under general anesthesia, or when we fall into a dreamless sleep, and what returns when we come round in the recovery room or wake up. And when we open our eyes, our brains don’t just process visual information; there’s another dimension entirely: Our minds are filled with light, color, shade, and shapes. Emotions, thoughts, beliefs, intentions—all feel a particular way to us.

As for intelligence, there are many available definitions, but all emphasize the ability to achieve goals in flexible ways in varied environments. Broadly speaking, intelligence is the capacity to do the right thing at the right time.

These definitions are enough to remind us that consciousness and intelligence are very different. Being intelligent—as humans think we are—may give us new ways of being conscious, and some forms of human and animal intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much species-level intelligence at all.

FLESH AND BLOOD: Being conscious is not the result of some complicated algorithm running on the wetware of the brain. It is rooted in the fundamental biological drive within living organisms to keep on living. Photo by NadyaEugene / Shutterstock.

This distinction is important because many in and around the AI community assume that consciousness is just a function of intelligence: that as machines become smarter, there will come a point at which they also become aware—at which the inner lights come on for them. In 2022, OpenAI’s chief scientist Ilya Sutskever tweeted, “It may be that today’s large language models are slightly conscious.” Not long after, Google Research vice president Blaise Agüera y Arcas suggested that AI was making strides toward consciousness.

These assumptions and suggestions are poorly founded. It is by no means clear that a system will become conscious simply by virtue of becoming more intelligent. Indeed, the assumption that consciousness will just come along for the ride as AI gets smarter echoes a kind of human exceptionalism that we’d do well to see the back of. We think we’re intelligent, and we know we’re conscious, so we assume the two go together.

Recognizing the weakness of this assumption might seem comforting because there would be less reason to think that conscious machines are just around the corner. Unfortunately, things are not so simple. Even if AI by itself won’t do the trick, engineers might make deliberate attempts to build conscious machines—indeed, some already are.

Here, there is a lot more uncertainty. Although the last 30 years or so have witnessed major advances in the scientific understanding of consciousness, much remains unknown. My own view is that consciousness is intimately tied to our nature as living flesh-and-blood creatures. In this picture, being conscious is not the result of some complicated algorithm running on the wetware of the brain. It is an embodied phenomenon, rooted in the fundamental biological drive within living organisms to keep on living. If I’m right, the prospect of conscious AI remains reassuringly remote.


But I may be wrong, and other theories are a lot less restrictive, with some proposing that consciousness could arise in computers that process information in particular ways or are wired up according to specific architectures. If these theories are on track, conscious AI may be uncomfortably close—or perhaps even among us already.

This lack of consensus about consciousness, when set against the rapidly changing landscape of AI, highlights the need for more research into consciousness itself. Without a principled and experimentally verified understanding of how consciousness happens, we’ll be unable to say for sure when a machine has—or doesn’t have—it. In this foggy situation, artificial consciousness may even arise accidentally, perhaps as a byproduct of some other functionality the tech industry installs in the next generation of its algorithms.

There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.



The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility. Certainly, nobody should be actively trying to create machine consciousness.

Existential concerns aside, there are more immediate dangers to deal with as AI has become more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism—putting ourselves at the center of everything—and anthropomorphism—projecting humanlike qualities into things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.

Many people, including some experts, are already projecting sophisticated cognitive competences into AI systems—large language models in particular—on the basis of largely anecdotal and frankly sketchy evidence. But do these models really understand anything? Do they—as has been claimed—have a theory of mind (the ability to attribute mental states, such as beliefs and desires, to others)? Claims that language models possess these capabilities usually rest on suggestive pieces of dialogue. This sort of evidence is remarkably weak, as any psychologist will tell you. It is even weaker when we seek to extrapolate from human experience to a machine. Although humans would need to be cognitively sophisticated to engage in some of the dialogue that chatbots are now capable of, the same conclusion does not hold for AI. Language models may well be able to participate in sophisticated linguistic interactions without understanding anything at all.


The lack of true understanding in language models is revealed by their tendency to confabulate: to make stuff up, spouting nonsense in confident language. When I asked OpenAI’s GPT-4 to write a biography of me, it stated wrongly that I was born in London. When I asked it to try again, making fewer errors about dates and places, it got things even more wrong, saying I was born in Hammersmith, London. This is a revealing answer, since a more specific claim is even more likely to be false, as anyone who genuinely understood the question would know.

Future language models won’t be so easy to catch out. Before long, they may give us the seamless and impenetrable impression of understanding and knowing things, regardless of whether they do. As this happens, we may also become unable to avoid attributing consciousness to them too, suckered in by our anthropomorphic bias and our inbuilt inclination to associate intelligence with awareness.

Systems like this will pass the so-called Garland Test, an idea that has entered philosophy from Alex Garland’s perspicuous and beautiful film Ex Machina. This test reframes the classic Turing Test—usually considered a test of machine intelligence—as a test of what it would take for a human to feel that a machine is conscious, even given the knowledge that it is a machine. AI systems that pass the Garland Test will subject us to a kind of cognitive illusion, much like simple visual illusions in which we cannot help seeing things in a particular way, even though we know the reality is different.

This will land society in dangerous new territory. By wrongly attributing humanlike consciousness to artificial systems, we’ll make unjustified assumptions about how they might behave. Our minds have not evolved to deal with situations like this. If we feel that a machine consciously cares about us, we might put more trust in it than we should. If we feel a machine truly believes what it says, we might be more inclined to take its views seriously. If we expect an AI system to behave as a conscious human would—according to its apparent goals, desires, and beliefs—we may catastrophically fail to predict what it might do.



Our ethical attitudes will become contorted as well. When we feel that something is conscious—and conscious like us—we will come to care about it. We might value its supposed well-being above other actually conscious creatures such as non-human animals. Or perhaps the opposite will happen. We may learn to treat these systems as lacking consciousness, even though we still feel they are conscious. Then we might end up treating them like slaves—inuring ourselves to the perceived suffering of others. Scenarios like these have been best explored in science-fiction series such as Westworld, where things don’t turn out very well for anyone.

In short, trouble is on the way whether emerging AI merely seems conscious or actually is conscious. We need to think carefully about both possibilities, while being careful not to conflate them. And we need action, too. Along with new institutions like the one that Marcus and Reuel propose, there should be major investment in consciousness research within the mind and brain sciences, so that we can be better informed when developing and responding to new generations of AI. (This research will also benefit society in many other ways, for example in medicine, law, and animal welfare.)

Accelerated research is also needed in the social sciences and humanities to clarify the implications of machines that merely seem conscious. And AI research should continue, too, both to aid in our attempts to understand biological consciousness and to create socially positive AI. We need to benefit from the many functions that consciousness offers while avoiding the pitfalls. Perhaps future AI systems could be more like oracles, as the AI expert Yoshua Bengio has suggested: systems that help us understand the world and answer our questions as truthfully as possible, without having goals—or selves—of their own.


Like the technology itself, attitudes toward AI are at an inflection point. Time is short to develop a rational and fit-for-purpose framework to ensure that the enormous potential of AI is used for the benefit of humanity and the planet. The implications of either true or apparent artificial consciousness must be part of the conversation. We need to bring to the forefront of our collective awareness the importance of understanding awareness itself.

Anil Seth is a professor of cognitive and computational neuroscience at the University of Sussex, co-director of the Canadian Institute for Advanced Research Program on Brain, Mind, and Consciousness, and an advanced investigator of the European Research Council. He is the author of Being You: A New Science of Consciousness.

Lead image: Peshkova / Shutterstock
