One question for Raphaël Millière, a Presidential Scholar in Society and Neuroscience at the Center for Science and Society at Columbia University, where he conducts research on the philosophy of cognitive science.

Photo courtesy of the Columbia University Center for Science and Society

Does an AI’s ability to talk mean it’s conscious?

Simply talking to a large transformer model like LaMDA and looking at its answers, as the Google engineer Blake Lemoine did, isn't the right kind of test, or source of evidence, for consciousness. The capacity to talk at all, or to talk about consciousness in particular, is neither a sufficient nor a necessary condition for being conscious. It's not necessary because many animal species are plausibly capable of conscious experience: your pets, apes, or cephalopods. They react to noxious stimuli in ways that seem entirely consistent with having consciousness, and they share a lot of neurobiological structure with us. Yet they cannot speak about their feelings. It's not sufficient precisely because of cases like LaMDA, where the model is very good at mimicry, giving the illusion that it is speaking about sentience or about its feelings. But it doesn't really have feelings.

We ought to consider a broader cluster of evidence: looking at the kinds of structures that biological and artificial systems have, and asking whether those systems can sustain the kind of computation, or computational complexity, that in humans we find associated with conscious experience or sentience. In the brain you have all sorts of recurrent connections, feedback loops, specialized circuits, subnetworks, and modules that you don't find in these large transformer models, which are conceptually very simple. You could describe the basic building blocks of these models fairly easily. Reverse-engineering the architecture of the human brain, on the other hand, is extremely complex.

Children learn in a very different way from how current transformer-based algorithms learn. The way current algorithms learn is completely passive: they are fed a torrential stream of data, bombarded with text and images. They don't roam the world and causally interact with it the way humans do. My sense is that causal interaction with the world is crucial to how children develop the right kinds of representations of the world across different senses, not just vision but also touch, hearing, and other modalities. This is how they learn to represent the causal structure of the world, not just correlations between events. Children can also learn from relatively scarce data, whereas current models need a huge amount of data, orders of magnitude more than children, to have even the slightest chance of solving tasks as well.

We are still only scratching the surface in neuroscience. It's complicated because we don't yet have a fully worked-out scientific theory of consciousness. There are various competing theories, and the science of consciousness is still young. The best we can do right now is look at the predictions our best theories make, both about behavior and about information-processing complexity. Taken together, that would be a better guide for speculating about sentience in biological and artificial systems than linguistic outputs alone.

Lead image: Studiostoks / Shutterstock