Irvin Yalom, an emeritus professor of psychiatry at Stanford University, dreamt about peering into minds. “A series of distorting prisms block the knowing of the other,” he wrote in Love’s Executioner: And Other Tales of Psychotherapy, in 2012. “Perhaps in some millennium, such union will come to pass—the ultimate antidote for isolation, the ultimate scourge of privacy.” If Kai Miller, a neuroscientist and neurosurgeon at Stanford Medicine, has his way, that day may come sooner rather than later.

With colleagues at the University of Washington, he published an article earlier this year in PLOS Computational Biology, describing a technique—a “template projection approach”—that, they say, “decodes perception.” It predicts what a person is looking at by analyzing their brain activity. An important feature of the technique, says Miller, is that “it provides a robust, continuous measure that is a summary statistic for how well the brain state at every point in time reflects the expected response” from a particular stimulus, like a picture of a face.
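
For readers who want a concrete picture of the idea, here is a minimal sketch, in Python, of what a template projection can look like: slide an “expected response” template along a continuous recording and score, at every time point, how well the ongoing activity matches it. It is written from the description above, not from the paper’s code, and the function name, normalization, and parameters are illustrative assumptions.

```python
import numpy as np

def template_projection(signal, template):
    """Slide a stimulus-response template along a continuous recording and
    return, for each time point, a score for how well the ongoing activity
    matches the expected response (illustrative sketch, not the paper's code).
    Both arguments are assumed to be 1-D NumPy arrays."""
    n = len(template)
    # Center and normalize the template so the projection is a unit-free match score.
    t = template - template.mean()
    t = t / (np.linalg.norm(t) + 1e-12)
    scores = np.empty(len(signal) - n + 1)
    for i in range(len(scores)):
        window = signal[i:i + n] - signal[i:i + n].mean()
        # Projection (dot product) of the ongoing window onto the template:
        # large values mean the current brain state resembles the expected response.
        scores[i] = np.dot(window, t)
    return scores
```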

What makes their approach new is that, in previous attempts to decode perception, researchers have had to pre-specify the time of a subject’s stimulus. “Real-world perception rarely occurs at pre-defined times,” says Miller, “and approaches to decoding perceptual experience should be extracted spontaneously from continuous cortical recordings. We have developed a technique to do just this.”

Miller wanted, in particular, a better understanding of the relationship between two types of brain activity during human perception: The first, broadband spectral changes, are distributed, widespread power shifts of electrical activity across multiple brain areas; the second, event-related potentials (ERPs), are the initial neuronal responses (the changes in the electrical “potential” of a neuron) to a stimulus (the “event”). Uncovering connections between them is an important step in learning how humans process visual information. Neither type of brain activity on its own was enough to predict what a person was seeing, says Miller. “The real breakthrough of the research is showing that the two streams carry complementary information that can be used to identify the percept.”
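
To make the “complementary streams” idea concrete, one could compute two time series from the same electrode—the raw voltage, which carries the ERP, and the power of high-frequency broadband activity—project each onto its own class template, and fuse the two match scores. The sketch below does this in Python, reusing template_projection from the sketch above; the frequency band, filter settings, and equal-weight fusion are assumptions for illustration, not the authors’ pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def broadband_power(signal, fs=1000.0, band=(70.0, 150.0)):
    """Illustrative broadband feature: power envelope of high-frequency activity.
    The sampling rate and frequency band are assumed values."""
    nyq = fs / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="band")
    return np.abs(hilbert(filtfilt(b, a, signal))) ** 2

def combined_match(signal, erp_template, bb_template, fs=1000.0):
    """Fuse the two streams: match of the raw voltage against an ERP template,
    plus match of the broadband power trace against a broadband template."""
    erp_score = template_projection(signal, erp_template)
    bb_score = template_projection(broadband_power(signal, fs), bb_template)
    n = min(len(erp_score), len(bb_score))
    standardize = lambda x: (x - x.mean()) / (x.std() + 1e-12)
    # Equal-weight sum of standardized scores -- a purely illustrative fusion rule.
    return standardize(erp_score[:n]) + standardize(bb_score[:n])
```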

To measure these types of brain activity, Miller and his colleagues needed electrodes, in the form of electrocorticographic (ECoG) arrays, placed directly on the brain surface, specifically on the areas of the brain dedicated to processing sensory information, the temporal lobes, located just above each ear. Since ethics forbids opening people’s heads up for curiosity’s sake, Miller and his colleagues found subjects who already had electrodes in place: seven epilepsy patients undergoing treatment.

During the experiment, the subjects were each shown a random sequence of grayscale images of faces and houses. Each image was visible for 400 milliseconds; between images, subjects saw a blank screen for another 400 milliseconds. Throughout, software recorded the broadband changes and event-related potentials and correlated them with the faces and houses being perceived. Afterward, the researchers could tell whether a subject had been looking at a house or a face with 96 percent accuracy, and within 20 milliseconds of the subject’s seeing it.
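
Continuing the illustrative sketches above (and reusing combined_match), a simple decision rule could then scan the continuous recording and, wherever the match to either class is strong enough, label that moment as a face or a house. The threshold and timing logic here are assumptions for illustration, not the study’s actual classifier.

```python
def decode_events(signal, face_templates, house_templates, fs=1000.0, threshold=3.0):
    """Label spontaneous perceptual events in a continuous recording.

    face_templates / house_templates : (erp_template, broadband_template) pairs
    threshold : illustrative z-score cutoff for calling an event
    Returns a list of (time_in_seconds, "face" or "house") tuples."""
    face_score = combined_match(signal, *face_templates, fs=fs)
    house_score = combined_match(signal, *house_templates, fs=fs)
    events = []
    for i in range(min(len(face_score), len(house_score))):
        if max(face_score[i], house_score[i]) > threshold:
            # Whichever class matches more strongly gets the label at this moment.
            events.append((i / fs, "face" if face_score[i] > house_score[i] else "house"))
    return events
```

In this toy version, the templates would come from averaging the responses to a labeled set of face and house presentations, mirroring the calibration phase of the experiment described above.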

Miller believes this breakthrough will eventually lead to new treatments for brain-injured patients. “My hope is that we will be able to implant devices that will record from one brain area, decode the content, and stimulate another brain area in order to regrow connections. I want to start doing that in 5 years.”

It’s not a stretch to imagine, for instance, training the algorithm to recognize the relationship between the brain’s language region and the motor region controlling the tongue and other speech muscles. It could perhaps allow a paralyzed patient, unable to speak or move, to communicate by imagining speaking instead. The resulting brain activity could be the input to a device that translates that data and emits it as speech. “With appropriate [brain] recordings you could learn extremely fine details of a person,” Gerwin Schalk, one of Miller’s colleagues, told Motherboard. “Like what they are literally thinking.” In a March case report, he and colleagues showed that, using the same brain-recording setup as Miller, one could tell non-speaking, brain-damaged patients stories and, within a few minutes, map the language areas that activate in response to narratives; this information could prove critical to surgeons trying to avoid damaging language functions.

Is being able to predict whether a person is seeing a house or a face really mind reading? Just barely, says Miller. The brain is incredibly complicated, he says, and messier than we realize. “Understanding how different information from different brain areas is coordinated and stitched together is several orders [of magnitude] from where we are now.” But real-time perception decoding, he says, is a strong start.

Kevin Blake Ferguson is a magician and a writer based in San Francisco. Follow him on Twitter @kevinblakemagic.
