
Where does a consciousness end and the rest of the world begin? Where is the line between inside and outside? Between life and not life? Between the parts of the universe that are conscious and those that are not? Between you and not you?

To build up a charge, a gradient, or natural selection, there needs to be some kind of a border, but physics and biology draw their borders differently. (Drop both a pigeon and a bowling ball from a rooftop, for proof.)

In the 1974 film Dark Star, an artificial intelligence is taught a few basics of René Descartes’s cogito, ergo sum (“I think, therefore I am”) arguments and, after realizing that its purpose is simply to explode, proceeds to ignore all further human commands and blows up itself, the ship, and the crew. Likewise, as a thought experiment, let us imagine an AI closer to home: one given planet-busting nukes, taught the basics of existentialism, and grown curious about itself. It may start to wonder about the causal chain at the beginning of what feels to it like its thoughts and, realizing that humans only mobilize when catastrophe is imminent, it might give us an ultimatum:

Dear H. sapiens,
You have five years to provide a complete description of free will; or, the exact border of Anna K.’s consciousness while she was in surgery, in Los Angeles, in 1996. Or, I blow up Earth. 

Warmly, The AI

The AI then provides the experimental details. In five years, it says, the AI will put Anna through a random set of subjective and objective trials, states, and tasks, and we, humanity, must be able to give a complete and total rolling prediction of every single one of Anna’s thoughts. The AI agrees that, if this is impossible, it will settle for a statistical distribution of probable or highly likely Anna thoughts instead of an exacting list of them all. If both of these prove impossible, because free will is truly free, the AI adds an allowable success condition: It will settle for an exact, atomic description of where Anna’s consciousness ends during her surgery, in 1996, as long as it accurately defines the line between Anna and not Anna during the experiment trials.

Most of Earth thus mobilized toward figuring out what was widely thought to be the easiest problem of the three: the line between Anna and not Anna. At first, an Earth-wide census was collected in which almost everybody had their opinions heard, no matter how wild or speculative. Some, the linguists, noticed that the problem was very similar to one the psychologist William James once posed for language. How, in a written sentence, asked James, does one know where the words end and the sentence begins?1 Perhaps we could prove by analogy, they said, that there are likewise similar borders for brains and consciousnesses, if only we could define where the neurons end and the person begins.


Others, the entomologists, noted that we should be able to answer smaller, simpler versions about nature and work our way, so to speak, up. They considered a spider hunting on its web. Does the web count as spider or not spider? The vibrations of the web alert the spider to the existence of something; likewise, we “hear” perturbations in the air arriving from a focal point far away by virtue of detecting vibrations in the hair cells in our ears.2 Was what the spider does, sensing its web’s vibrations, so very different from what a primate does with the hair cells of the inner ear in order to listen? Is the air not simply a kind of see-through web, a kind of surface on which vibrations travel and information is gleaned? And so, they argued, if we include the ears and the acoustic sensing apparatus as part of Anna’s boundaries, should we not also include the web of the spider? Should we thus not also count the electrode that prodded her brain during surgery, since it was able to induce laughter, joy, and mirth no differently than if another part of her brain had done so au naturel?

Why stop there, asked these ideas’ detractors, partly enraged. Why not include the trees the web hangs from, too? Or the moon that pulls on tides that evaporate air to rain on the trees to grow the branches from which to hang the web? The Big Bang? Where does one stop?

Others, the ornithologists, asked about gizzard stones of birds, eaten early in some birds’ lives and necessary for digestion. Surely, they said, we don’t count the gizzard stones as part of the individual bird consciousness, do we? Then perhaps we should remove all the rote mechanical stuff from the brain in our descriptions, like the dumb proton pumps or microtubules that, in isolation, are no more interesting than a gizzard stone. If we are looking for the exact line between Anna and not Anna, said some dualists, going even further, we should remove all the unnecessary mechanical elements of her body and brain and leave only the conscious parts—a bit like sifting for gold, they added. But there would be nothing left if we did that, came a response, from the materialists.

Then the microbiologists came along and asked about the possibility of infection or microbiota in Anna’s body.

What if Anna had a parasite in her brain at birth, vertically transmitted from her mother, which nestled inside her neurons, as some are prone to do?3 Is her consciousness the brain minus the parasite, the brain plus the parasite, or are they a kind of amalgamated mind? We should figure this out beforehand, they argued, just in case. Others, the behaviorists, wondered about the tasks Anna would have to perform during the test. What if the AI made her read a book or watch a film? Would the borders of her consciousness change in the act? The world’s stories were thus cataloged into those that have attempted to mimic the interiority of thoughts and that thus might, in some sense, insinuate themselves into Anna if she read them. The works of James Joyce, Virginia Woolf, and Julio Cortázar were analyzed in depth for their effects on the reader; the 1947 film noir Lady in the Lake, which was shot almost entirely from the point of view of a detective on the case (“YOU and ROBERT MONTGOMERY solve a murder mystery together!”), and Spike Jonze’s Being John Malkovich saw surprising resurgences, briefly becoming the most popular films in the world.

INSIDE OUT: Can we ever get outside ourselves and inside the thoughts of another? Like other compelling films and works of fiction, Being John Malkovich wrestled with this central question of consciousness. Image credit: Wikipedia.

Some retired neuroscientists dismissed the idea that low-fidelity film and literature should be considered a boundary worry, claiming that only video games offered the correct feedback loop of action to perception required to create an inside and an outside, and thus a border. A study was done on all those who had played the 2011 first-person video game Dinner Date, which has one take the role of a man’s impuissant subconscious as he drinks wine while being stood up for dinner. Confusingly, some had remembered their experience as an actual memory of having been once stood up, which meant that the fictional story had insinuated itself into their autobiographical story of self. If a brain is a prediction engine, some memory researchers argued, then would not even false memories, trapped in or as synapses and shaping the brain’s predictions, count as internal to their owner? What if the AI makes Anna play Dinner Date and she remembers it as really having happened to her? We should prepare, they warned.

Yet still others, the literary critics, noted that unlike novels, video games and film have never had the second-person perspective, the “you.” Cameras in video games often take the point of view of a character’s eyes and ears (first person) or take a vantage either above or behind (third person) the character, as a drone’s-eye view of things. While true, they said, that players often get a vague out-of-body feeling during games and will duck if Mario’s head is about to hit the ceiling or will turn their bodies when their go-kart needs to turn a corner in a race, this does not mean that they are Mario or are the go-kart. It means merely that they can empathize with or ally their consciousness with virtual objects like Marios or go-karts.


We should only spend time on second-person games, they claimed, but no such game exists, so can we just drop it? But can we be 100 percent sure, came the response from the internet, because wasn’t there that viral video shared widely once of someone poking under a fridge with a broomstick and when the mouse comes running up the broomstick toward the camera, remember how a vast majority of people watching on their phones drop their phones in fright? Doesn’t this mean that just by viewing the video and holding the phone with their hand they in some sense thought they were “holding” the broomstick? If the brain cannot tell the difference between fiction and reality, will we have to include all fictions, as they are experienced, in order to define the boundary between Anna and not Anna?

This took a while to solve. Eventually, a different set of literary theorists pointed out that there is in fact an arguable case for the second person in a video game, worthy of study. In Driver: San Francisco, the main character, after a near-death experience in a car accident, can take over the consciousnesses of other characters in the game. At one point, however, while inhabiting the mind of a secondary character, you as the player find yourself in a car chase and are told to chase your own car. At which point, part Inception, part Wings of Desire, you start to control your car but from the point of view of the person chasing you. What if the AI makes Anna play that game, some asked? Consciousness extends to that which it controls, and the brain’s only output interface with the outside world, after all, is the set of neurons that connect to its muscles. If we count the electrical puppet strings from her brain coursing down into the hand as a part of Anna, why not the simple electrical circuits of the game’s controller? (Why not the surgeon’s electrode, also, as it stuck out of her brain during surgery?)

Yet still others, the mathematicians and statisticians, argued that the video game and literature stuff was nonsense. Any proof, they claimed, would have to start with a math-based definition of the border between living and nonliving matter. A drop of oil placed in water must diffuse because it, unlike life, cannot maintain its order.4 Life, they said, is the opposite of the drop of oil: it does not diffuse, and it can maintain its order against the drives of the universe toward spread, chaos, and heat death. The drop of oil, on the other hand, like the flame of a candle, has no capacity to keep the outside out or maintain an ordered inside, because its borders are porous to the diffusing world and it does not resist the universe’s temptations. Life, by contrast, does resist decay because it has to, and only here, at this border between life and not life, can we say that the continual nesting of this capacity to fight disorder is the difference between Anna and not Anna.


Anna, thus, is a bundle of statistical drives, not biological drives, and it is these that create the separations and boundaries. These statistical boundaries are called “Markov blankets” and can nest, like Russian dolls.5 All we would need to do, they said, is find which of Anna’s Markov blankets is the most all-containing, that is, which of her Markov blankets contains the most complete set of the others. The relations between Anna’s parts, they explained, are similar to family relations: a person’s “family blanket” would be their parents, their children, and any other parents of their children. Instead of family relations, the statistical version of a Markov blanket for, say, a single cell would sit at the outer edges of its actions on and perceptions of the world, which aligns nicely with its cell edges, or its membrane.
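The family analogy above is in fact the literal definition used in statistics: in a directed graphical model, a node’s Markov blanket is its parents, its children, and the other parents of its children. A minimal sketch (the graph and node names here are illustrative, not from the text):

```python
def markov_blanket(edges, node):
    """Return the Markov blanket of `node` in a directed graph.

    edges: a set of (parent, child) pairs.
    The blanket is the union of the node's parents, its children,
    and the other parents of its children (its "co-parents").
    """
    parents = {p for (p, c) in edges if c == node}
    children = {c for (p, c) in edges if p == node}
    co_parents = {p for (p, c) in edges if c in children and p != node}
    return parents | children | co_parents


# A toy network: A -> X, X -> Y, B -> Y, Y -> Z
edges = {("A", "X"), ("X", "Y"), ("B", "Y"), ("Y", "Z")}
print(markov_blanket(edges, "X"))  # parents {A}, children {Y}, co-parents {B}
```

Conditioned on its blanket {A, Y, B}, node X is statistically independent of everything else in the graph (here, Z), which is the sense in which a blanket marks the “edge” of a thing.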

Anna, though, a complicated primate, with billions of interconnected cells, each with their own Markov blankets, must have some blanket that is the last and greatest. Some blanket in the exact spot where her inside cannot claim the outside as part of its blanket and the outside cannot claim the inside as part of its. Some blanket that is a rolling process sewn and resewn, that stretches and pulls to adapt to the borders of her actions and perceptions with the timing most relevant to her parts. It is, like literal clockwork, remade every second. Thus, it was claimed, no single answer would suffice. Five years of Anna versus not-Anna boundaries would be five years of rolling answers.

One day, a question from a middle schooler, submitted to a contest looking for fresh ideas, confounded all the world’s experts: “What about a caterpillar when it turns into a butterfly? When it is liquid goo during its metamorphosis, is it butterfly or caterpillar or what?”6

The implication was immediately recognized. The girl is right, said some developmental neuroscientists, that the brain is physically changing its shape every millisecond of every day or every year of its entire life. It is never in exactly the same state. A child’s mind metamorphoses into an adult’s mind. It does not simply grow. From a purely biochemical point of view, the human brain thus is always comparable to metamorphic goo. Others, the screenwriters, noticed a nonlinear storytelling problem raised by this conundrum. Markov blankets, they noted, are nested spatially but perhaps they could also be nested temporally? People were confused until the parasitologists came along and admitted to thinking along the same lines. After all, they said, a single-celled parasite that has multiple life-cycle stages is better thought of as a kind of über-organism with organ separation across time instead of as an organism jammed into a single body at one point of time. And if that is indeed true for single-celled parasites, there is no reason it is not also true for 86 billion single cells inside of brains, which also constantly change and tweak their genetic profiles over time.

Yet others, the monists, argued that all of Anna’s internal models of the world should be removed from her mind, since above some level of fidelity her internal models become good enough to count as “outside.”7 If the tasks require that she spend a year growing and walking and crawling in a normal home, for example, and then ask her later how many windows the home had, and if Anna is able to trace back her memories well enough to answer the question, then it means she has a little bit of the house in her through her representation of it. Others, the lawyers and race-car drivers, strongly objected to this, saying that a diorama depiction of a car crash was not the same as an actual car crash and that the curved lines of a boat or the aperture of its sails are design products of water and wind. How could these be subtracted from the boundary of the boat itself? Are we seriously playing with dolls, they challenged, while our species is at stake?

In response, the librarians pointed out a scene in The Hitchhiker’s Guide to the Galaxy where a man inverts his walls and places the bookshelves and all the wall hangings of his living room on the outside of the walls. He then claimed, because books always face in, that while under his house’s roof he was standing on a small patch of “outside” while the rest of the world, because the walls were facing that way now, was in fact “inside.” Maybe, he said, somewhat impishly, the easiest way to define the border between Anna and not Anna would be to define not Anna as a really small and obvious place so that everything else would be, by definition, Anna?

In the end, humanity came together to settle on a 14-word answer. Not because it was proved correct but because of the chance, however slim, that the process of verification and fact-checking would take the AI longer than the expected life span of the universe. If so, the threat, on par now with the inevitable heat death of the universe, could effectively be ignored because entropy would win the day, as it was going to anyway, while humanity could go back to living as it had before, at war only with itself:

Dear AI,
Anna is all of it. Forever moving forward and backward in time, too. Everything.

Warmly Yours, Earth

From Nineteen Ways of Looking at Consciousness by Patrick House. Copyright © 2022 by the author and reprinted by permission of St. Martin’s Publishing Group.


Footnotes

1. The psychologist William James once wrote, “Take a sentence of a dozen words, and take twelve men and tell to each one word. Then stand the men in a row or jam them in a bunch and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.” This idea, more broadly—defining, or separating, an emergent or larger whole from the boundaries of its parts—is sometimes referred to as the “superposition problem,” based on a concept borrowed from quantum mechanics that allows quantum states to be additive (“superposed”) and thus allows every quantum state to be described as the sum of two or more others: William James, The Principles of Psychology: In Two Volumes, vol. 1, 1890, facsim. ed. (New York: Dover, 1995).

2. Human ears have tens of thousands of tiny hair cells that move, like grass in the wind, in response to the physical compression of air, which is then interpreted by the brain into what we call “sound.”

3. This does happen. The single-celled parasite Toxoplasma gondii, for example, can be passed from mother to fetus and hides, often dormant, inside of neurons in the brain for what may be the lifetime of the host.

4. This example is used by the neuroscientist Karl Friston to differentiate, in part, life from not-life. The basic idea is that cellular life, unlike a drop of oil, maintains its borders instead of diffusing, with interesting consequences: Karl Friston, “Am I Self-Conscious? (Or Does Self-Organization Entail Self-Consciousness?),” Frontiers in Psychology 9 (2018): 579 and “A Free Energy Principle for Biological Systems,” Entropy 14, no. 11 (November 2012): 2100–21.

5. Friston also argues that one of the consequences of life’s tendency to pursue the minimization of “free energy,” a concept related to entropy and equilibrium, involves a way of defining a statistical border between self and not-self. These borders, if defined or arranged properly, can group into something called a Markov blanket, a concept used commonly in machine learning, and possibly guide the organization and behaviors of all life: Karl Friston, “Life as We Know It,” Journal of the Royal Society Interface 10, no. 86 (September 6, 2013): 20130475 and “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience 11, no. 2 (February 2010): 127–38; Michael Kirchhoff, Thomas Parr, Ensor Palacios, Karl Friston, and Julian Kiverstein, “The Markov Blankets of Life: Autonomy, Active Inference and the Free Energy Principle,” Journal of the Royal Society Interface 15, no. 138 (January 2018): 20170792.

6. In my vote, one of the most compelling and as-yet-unanswered scientific questions, second only to those related to the origin of consciousness, is whether or not a caterpillar can retain a memory across its goo-stage metamorphosis and on into its moth or butterfly form. The jury is still out: Douglas J. Blackiston, Elena Silva Casey, and Martha R. Weiss, “Retention of Memory through Metamorphosis: Can a Moth Remember What It Learned as a Caterpillar?” PloS One 3, no. 3 (March 5, 2008): e1736.

7. In general, monists try to remove boundaries or separations between concepts. There are many kinds of monism, but I figure if anyone might appreciate a lack of further distinction here, it would be them.
