It’s hard to imagine an encryption machine more sophisticated than the human brain. This three-pound blob of tissue holds an estimated 86 billion neurons, cells that rapidly fire electrical pulses in split-second response to whatever stimuli our bodies encounter in the external environment. Each neuron, in turn, has thousands of spindly branches that connect with other cells at junctions called synapses, across which those electrical messages pass. Somehow the brain interprets this impossibly noisy code, allowing us to effectively respond to an ever-changing world.

Given the complexity of the neural code, it’s not surprising that some neuroscientists are borrowing tricks from more experienced hackers: cryptographers, the puzzle-obsessed who draw on math, logic, and computer science to make and break secret codes. That’s precisely the approach of two neuroscience labs at the University of Pennsylvania, whose novel use of cryptography has distinguished them among other labs around the world, which are hard at work deciphering how the brain encodes complex behaviors, abstract thinking, conscious awareness, and all of the other things that make us human.

The Penn scientists have taken their cues from a 73-year-old algorithm that British code-breaker Alan Turing used to read secret German messages during World War II, and a mathematical sequence more famously used to break into digital keypad locks on cars. “Neurons extract information from the world and put it in code,” says Joshua Gold, an associate professor of neuroscience at the University of Pennsylvania. “There’s got to be some kind of code-breaker in the brain to make sense of that.” Employing cryptography in the neuroscience lab, adds Gold, has provided new insights into the “gooshy hardware” that is the brain, exposing its operations as an “information-processing machine.”

Gold didn’t give much thought to cryptography until the early 2000s, when he was working as a postdoctoral fellow in Michael Shadlen’s monkey lab at the University of Washington. The lab focused on how the brain makes simple perceptual decisions. How does it determine, for example, whether an object is moving to the left or to the right? The fundamental problem in making such decisions is the trade-off between speed and accuracy. The brain needs to take in enough information to make the correct decision, but not so much data that by the time it processes it all, the environment has changed and the decision is moot.

In thinking about the computations the brain might use to solve this problem, Shadlen picked up a book called Good Thinking, written by statistician I.J. Good in 1983. Good was a deputy statistician at Bletchley Park, the estate that served as headquarters for the British government’s code-breaking unit during World War II, which was led by Turing, widely regarded as one of the best mathematicians of the modern era. When reading Good’s book, Shadlen was struck by a description of one of Turing’s algorithms—a method for deciphering the supposedly undecipherable messages the Germans created using a machine called Enigma.

The Enigma machine looked like an oversized electric typewriter stuffed into a wooden box, with its keyboard connected to several small rotors tucked inside. Every time a letter key was pressed, the rotors turned slightly, causing the re-mapping of the pressed letter into any of the other letters in the alphabet. After the machine scrambled the message, the author could use Morse code to transmit it via radio waves. When the message’s recipients set their own Enigma machine to the corresponding rotor settings (which they’d have to know in advance), they could type in that scrambled message and—presto—the machine would spit out the decoded version.

The Germans thought Enigma encryption was impenetrable; they relied on it to communicate all sorts of juicy information about their military strategy. Enigma operators working for the German navy (whose messages were the focus of Turing’s work) changed the starting positions of their rotors every day.

Beginning in 1939, Turing’s team developed a complex, multi-step process for decoding German messages and their contents. It was only the first step in Turing’s process, however, that intrigued Shadlen and Gold, and which they thought could be adapted to brain research. That was the algorithm that Turing used to determine whether any two intercepted messages—Bletchley Park intercepted hundreds of German messages a day—had been written on Enigma machines in the same rotor state.

Turing’s algorithm hinged on a genius bit of logic. He reasoned that if two Enigma machines had been set in different rotor states, then the probability of the first letter in one message being the same as the first letter in the other message would be random—or, more precisely, 1 in 26, for the 26 letters of the alphabet. The same goes for the second letter in each message, the third letter, and so on.

In contrast, if two messages came from machines set in the same rotor state, then their letters would be more likely to match. Why? Because in the German language (just as in English), some letters are used more frequently than others.

“E,” for example, is the most frequently used letter in German. “That means, if you take any two German texts and just compare them character by character, you know the most likely pair you’re going to find is two E’s,” Gold explains. With Enigma messages, the person trying to break into them wouldn’t necessarily see more matching “E’s,” because all of the letters have been re-mapped to other letters. Still, the probability of getting any matched letter would be higher if the messages originated from machines in the same state.

Working with linguists, who helped him rank letter frequencies of the German language, Turing determined that two encrypted German messages created from machines in the same state would have a letter-matching probability of about 1 in 13, rather than 1 in 26.

So Turing’s algorithm would essentially compare each letter in one message to the corresponding letter in the second and tally up the number of matches. If the messages were long enough, then by comparing the letter-matching frequencies he could determine with statistical confidence whether the messages had come from Enigma machines in the same state. The algorithm could also show whether the messages were too short to bother with this comparison, allowing Turing to quickly move on to the next set.
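As a rough sketch of that tally (in Python, with made-up cipher text rather than real Enigma traffic), the comparison amounts to counting the aligned positions where two messages share a letter, then weighing that count against the two hypothesized rates:

```python
# Toy sketch of the match tally (hypothetical texts, not real Enigma traffic).
# Different rotor states predict a match rate near 1/26; the same rotor state
# predicts a rate near 1/13.

def match_rate(msg_a, msg_b):
    """Count aligned positions where the two messages share a letter."""
    matches = sum(a == b for a, b in zip(msg_a, msg_b))
    return matches, min(len(msg_a), len(msg_b))

matches, n = match_rate("XKQWERZUIOPLMNBVCXYASDFGHJ",
                        "TRNXCVBQWASZUIOPLKJHGFEDMY")
print(matches, "matches in", n, "positions")  # compare to n/26 vs. n/13
```

For a 26-letter pair of messages, "different states" predicts about one match and "same state" about two, so single short messages are nearly uninformative on their own, which is exactly why the sequential accumulation described below mattered.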

For messages that were long enough to test, a key feature of Turing’s algorithm was that it summed the evidence as it was being collected—one pair of letters at a time—until enough had accumulated to make a decision about the letter-matching frequencies with a reasonable level of certainty. This method is now known as “statistical sequential analysis.” If the first pair of letters matched, for example, that would be a weak piece of evidence for the hypothesis that the messages came from the same rotor state; after all, that match could have just been due to chance. If, on the other hand, the first 100 pairs of letters included 10 matches, then that would be much stronger evidence. Once the summed probabilities reached a pre-determined level of certainty, Turing’s algorithm could “decide” that the hypothesis—that is, that the two messages came from machines in the same state—was true.
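A minimal sketch of that sequential logic, assuming the 1-in-26 and 1-in-13 match rates described above (the decision threshold here is an arbitrary illustrative value, not Turing's):

```python
import math

# Sequential analysis sketch: weigh each letter pair as it arrives.
# H1: same rotor state (match probability ~1/13); H0: different (~1/26).
P_MATCH_H1, P_MATCH_H0 = 1 / 13, 1 / 26

def sequential_decision(pairs, threshold=4.0):
    """Accumulate a log-likelihood ratio pair by pair; stop once the
    evidence crosses a pre-set threshold in either direction.

    pairs: iterable of (letter, letter). Returns 'same', 'different', or
    'undecided' if the stream ends before either threshold is reached.
    """
    log_lr = 0.0
    for a, b in pairs:
        if a == b:
            # A match is strong evidence for the same-state hypothesis.
            log_lr += math.log(P_MATCH_H1 / P_MATCH_H0)
        else:
            # A mismatch is weak evidence for the different-state hypothesis.
            log_lr += math.log((1 - P_MATCH_H1) / (1 - P_MATCH_H0))
        if log_lr >= threshold:
            return "same"
        if log_lr <= -threshold:
            return "different"
    return "undecided"
```

Note the asymmetry the numbers impose: each match adds log(2) ≈ 0.69 of evidence for "same," while each mismatch subtracts only about 0.04, so a handful of matches can settle the question long before the messages are exhausted.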

Turing’s methods were hugely successful: Winston Churchill later said that the Enigma decoding was crucial to winning the war. As it turns out, the mathematics behind the algorithm were independently discovered by Abraham Wald, an Austro-Hungarian Jew who fled to the United States when the Nazis invaded. While Turing was decoding for the British, Wald developed some of the same math tricks to help the U.S. Army determine, for example, whether a cart of munitions was defective or worthy of shipping to the front lines.

Wald’s and Turing’s approach has since influenced many scientific fields, from physics and fluid dynamics to psychology and even economics. “It’s all over the place,” says Roger Ratcliff, distinguished professor of behavioral and social sciences at Ohio State University. Ratcliff has championed these methods for psychological experiments for nearly three decades. He thinks of human behavior as a series of decisions. “Which word to say, whether to go get a cup of coffee or a cup of tea—all of these things are little decisions,” Ratcliff says. “I think this runs through everything we do.”

Shadlen and Gold were the first to apply the method to neuroscience. “It turns out to be a great insight for how the brain assembles evidence to make decisions,” Shadlen says. Many neurons in the outer layers of the brain are selective, meaning that they fire in response to specific stimuli. Some neurons in the visual cortex, for example, fire when objects in our visual field are moving toward the left, whereas others fire when objects are moving toward the right.

The neurons aren’t perfect, however; sometimes cells selective for rightward motion will fire at leftward motion, and vice-versa. In that way, Shadlen and Gold reasoned, neurons are akin to the letter pairings of two Enigma messages. A single match of letters does not provide enough evidence to say whether the messages originated from machines in the same rotor state. Similarly, any one neuronal signal is not enough for the brain to accurately determine whether an object is moving to the left or to the right. To figure this out, the brain relies instead on the aggregate activity of thousands or even millions of neurons.

In 2002, Gold and Shadlen published a largely theoretical paper suggesting that the brain uses Turing-like computations—or some close approximation of them—to weigh evidence from neuronal firings and make perceptual decisions, such as determining whether a field of dots is moving to the left or right. “Turing’s work represents a form of probabilistic reasoning that the brain appears to have adopted to solve particular problems that are common to both perception and code-breaking,” Gold says.

Just as the algorithm accumulated evidence in real time, the brain seems to process neuronal inputs as they’re coming in, and adjust its expectations accordingly. If it receives a big batch of signals from neurons that prefer left motion, for instance, and from few that prefer right motion, then it will take that as strong evidence that the object is moving to the left. Once the inputs have crossed some threshold of certainty (scientists are still trying to understand how the brain determines that threshold), the brain makes its decision and moves on to the next.
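That accumulate-to-threshold idea can be put in the same sequential form as the code-breaking example. The sketch below is a toy model with hypothetical numbers, not a fit to real neural data: signed evidence (say, left-preferring minus right-preferring activity at each moment) is summed until it crosses one of two decision bounds.

```python
def accumulate_to_bound(evidence_stream, bound=10):
    """Toy evidence accumulator: sum signed inputs (left-preferring minus
    right-preferring activity) until a decision bound is crossed.

    Returns ('left' or 'right', samples used), or ('undecided', n) if the
    stream ends before either bound is reached.
    """
    evidence = 0
    for t, delta in enumerate(evidence_stream, start=1):
        evidence += delta
        if evidence >= bound:
            return "left", t
        if evidence <= -bound:
            return "right", t
    return "undecided", len(evidence_stream)
```

Raising `bound` is the model's version of the speed-accuracy trade-off: decisions take more samples but are less likely to be swayed by a chance run of noisy inputs.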

There’s another important similarity between the tasks of the brain and the code-breakers: Both face an unavoidable tension between speed and accuracy. For the World War II code-breakers, Gold says, speed and accuracy “become visceral when you think about these people realizing that thousands of lives would be saved if they could just decode these messages today.” Turing designed his algorithm to balance speed and accuracy, optimizing both. That suggested to Gold and Shadlen that the brain itself performs a similar optimizing act. In the brain, says Gold, “we always knew the phenomenon existed in a host of perceptual and cognitive tasks—quick decisions save time but tend to be inaccurate, whereas taking time leads to higher accuracy but can be inefficient.” The model of Turing’s algorithm, Gold says, “makes a nice prediction for how the brain might deal with the speed-accuracy tradeoff.” They’re still testing just how that happens.

Geoffrey Aguirre’s best brain hack began late one night, at his home, while trawling Wikipedia. At his University of Pennsylvania lab, Aguirre, a neuroscientist, uses brain scanners to study how perceptions are shaped by the past. The nervous system is a master of adaptation, constantly tuning itself to particular changes in the environment. When you go into a dark basement, for example, your eyes quickly adjust to the lack of light (a change that can be painfully clear when you climb the stairs and hit the light again).

The brain’s nimbleness is great for our everyday lives, but it can be a pain when designing brain-imaging studies, in which researchers take pictures of participants’ brains while they experience different stimuli, such as looking at pictures of faces or hearing a series of sounds. Because of so-called “carry-over effects,” the way a participant’s brain responds to a picture of the color blue, for example, differs slightly depending on whether the preceding picture was blue or orange.

Teasing apart these carry-over effects is particularly difficult because the brain scanner—a functional magnetic resonance imaging (fMRI) machine—measures blood flow in the brain, a proxy for neural activity. Blood flow changes relatively slowly, on the order of seconds, whereas neurons fire over milliseconds. So fMRI’s sluggishness often masks neural carry-over, which can be annoying both for scientists who want to correct for carry-over effects and for others, like Aguirre, who want to scrutinize them.

In 2007, Aguirre published a paper showing that carry-over effects can be managed by placing the experimental stimuli (pictures of different colors) in a particular order. The idea is to arrange the stimuli so that every picture appears both before and after every other picture at some point during the experiment. Then, when analyzing the data, researchers can compare the brain responses for different before-and-after combinations (blue followed by blue, versus orange followed by blue) and easily spot any carry-over effects.

That order of stimuli—in which every picture is “counter-balanced” with the previous one, without any repeat combinations—is called, in the math world, a “type 1, index 1 sequence.” It works well for brain scanning because of its efficiency: It gives the shortest possible counter-balanced sequence, minimizing the participant’s time lying in an uncomfortable scanner.

But for Aguirre, that sequence also had limitations. For example, the algorithm to create a type 1 sequence only works for six or more stimuli. What’s more, it only accounts for the carry-over effects from one previous picture. But what if you want to study the effects of the previous two pictures, or three? This would be important because research has shown that our perceptions can be influenced not only by the last thing we saw, but by the last several things. This phenomenon happens at longer time scales, too. The classic example, Aguirre says, relates to facial recognition. “If somebody moves from Chicago to Tokyo, people will describe that for a certain number of months, they find it difficult to tell faces apart,” Aguirre says. “But over time you become familiar with this new range of facial appearances and then you get good at telling faces apart.”

Those problems with type 1 sequences drove Aguirre to “long nights spent poking through Wikipedia,” he says, laughing. He read page after page on the principles of discrete mathematics and graph theory. One night, he stumbled on the page for de Bruijn sequences, a large category that includes the type 1 sequences Aguirre was already familiar with. “De Bruijn sequences are this whole world of sequences that have a special property of counter-balance,” Aguirre says. “I realized that they would be perfect for the kinds of applications we had.”

To understand how de Bruijn sequences work, think of a string of letters, such as Aguirre’s initials, GKA. The de Bruijn sequence arranges those three letters in a long sequence so that every possible three-letter combination is used once and only once. Aguirre made a logo for his lab’s webpage to illustrate what one of these sequences looks like.

So why is a de Bruijn sequence so useful? Efficiency. If you were to write out every possible three-letter combination separately, your list might begin with:

GGG
GGA
GGK
GAA
GAG
GAK

… and so on. You’d have to type 81 letters before exhausting every combination: 27 triplets of three letters each. A de Bruijn sequence instead lets the letters of one triplet bleed into the next, reaching all 27 possible combinations in just 27 letters.
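A sequence with this property can be generated with a standard construction (the Lyndon-word concatenation method; this is a generic textbook algorithm, not necessarily the tool Aguirre's lab uses):

```python
def de_bruijn(alphabet, n):
    """Generate a cyclic de Bruijn sequence containing every length-n
    string over `alphabet` exactly once (Lyndon-word construction)."""
    k = len(alphabet)
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1 : p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(alphabet[i] for i in sequence)

seq = de_bruijn("GKA", 3)
print(len(seq))  # 27 letters, versus 81 for listing every triplet separately
```

Because the sequence is cyclic, the last two letters wrap around to the front; appending `seq[:2]` to the end and sliding a three-letter window across the result visits all 27 triplets exactly once.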

Thieves can use these sequences to break electronic keypads such as those found on car doors. These generally have buttons representing the numbers 0 through 9, and pressing the right four buttons in a row unlocks the car. “If you know a de Bruijn sequence, you can cut the amount of time it would take you to crack that code substantially,” Aguirre says.
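The arithmetic behind that time savings, assuming a lock that checks the last four digits pressed (a sliding window, which is what the trick requires), is simple to lay out:

```python
# Keypad arithmetic (toy model): a 4-digit code over the digits 0 through 9.
# Trying every code separately costs 4 presses per code.
naive_presses = 10 ** 4 * 4      # 40,000 presses

# If the lock accepts any 4-digit window of recent presses, one sweep of a
# de Bruijn sequence of order 4 covers every code, plus 3 presses to close
# the final wrap-around windows.
de_bruijn_presses = 10 ** 4 + 3  # 10,003 presses
print(naive_presses, de_bruijn_presses)
```

That is roughly a four-fold reduction, and the advantage grows with longer codes and larger keypads.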

Aguirre immediately saw how these sequences would be useful for brain-imaging experiments. For his purposes, the stimuli of an fMRI experiment (such as pictures of colors) are like the numbers in the car lock. The de Bruijn sequence would give him a way to put them in the right order. Just as a sequence could be assembled for any length of keycode—a three-digit code, say, or four digits, or five—a sequence could be made for any level of counter-balancing of stimuli in the imaging experiment.

The only tricky part is that a given set of stimuli has more than one de Bruijn sequence—many more. For example, if you wanted to design an experiment using 17 different pictures, with each picture counter-balanced so that researchers could later work out the carry-over effects from the previous picture, then you’d have a mind-boggling 8 × 10²⁴⁴ different de Bruijn sequences to choose from.
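That head count can be checked against the standard formula for the number of distinct cyclic de Bruijn sequences, (k!)^(k^(n-1)) / k^n, taking k = 17 stimuli and a window of n = 2 consecutive pictures (one level of counter-balancing):

```python
import math

# Number of distinct cyclic de Bruijn sequences: (k!)**(k**(n-1)) // k**n.
k, n = 17, 2  # 17 pictures, counter-balanced over consecutive pairs
count = math.factorial(k) ** (k ** (n - 1)) // k ** n

# Report the leading digit and the order of magnitude of this huge integer.
print(f"~{str(count)[0]}e{len(str(count)) - 1}")  # ~8e244
```

The exact integer has 245 digits; the experiment-design problem is choosing well from that pool, not enumerating it.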

In 2011, Aguirre and graduate student Marcelo Gomes Mattar published a method to help researchers select from that massive pool the sequences that make the most sense for a particular experiment.

A researcher might have a hypothesis about how the brain responds to seeing a particular string of stimuli—such as red then orange then yellow, to use a fictitious example. Using Aguirre’s methods, the researcher could choose a de Bruijn sequence that not only counter-balanced the stimuli (so that, by the end of the experiment, every picture has appeared both before and after every other picture), but also included that red-orange-yellow sequence that was key to the hypothesis. If the brain then responded in a robust way to the target sequence, the hypothesis would be supported.

“The basic idea is that, only if your theory is right, and the person you’re studying has a neural code that corresponds to your theory, only then will the neural system ring in a way that you can measure with the neuroimaging technique,” Aguirre says. It’s even possible to test more than one theory in the same sequence, he adds. “If that neural bell rings out at 10 seconds, then maybe I’d know my first theory was right, but if the bell rings out at 15 seconds, I’d know it was the other theory.”

Aguirre and another of his graduate students, David Kahn, are now using this method to test an intriguing idea about how people with autism see the world. The theory is that individuals with this developmental disorder have more sensitive and discriminating visual perception. This hypersensitivity would allow them to pick up on tiny differences in faces and scenes, but it might also make it difficult for them to adjust to a changing environment. If the theory is true, then a person with autism would show a different brain response to a series of similar-looking faces than a person without autism. And because the researchers want to look specifically at carry-over effects from several consecutive images, the study is perfectly suited for de Bruijn sequences. If the experiment works, then the researchers will have a better understanding of the neural underpinnings of autism in some people, and with it, potentially, a new lead on finding treatments for the bafflingly complex disorder.

Aguirre has made his software freely available through his website, and other laboratories have begun to try it out. Sean MacEvoy, a neuroscientist at Boston College, has used de Bruijn sequences to investigate how the brain’s visual cortex represents an object’s identity and its position in space.

MacEvoy’s experiment had 18 different stimuli, and he says the de Bruijn sequence worked like a charm. He plans to use it in future experiments. “With the sequence, you know you don’t have any spurious biases that can cloud the interpretation of your data,” he says. “I don’t see a downside.”

For Aguirre, cryptographic techniques will continue to help neuroscientists penetrate complex functions of the brain. Just as cryptographers have perfected ways to transform a seemingly arbitrary message into a “hash code,” a sequence that can only be interpreted in one valid way, neuroscientists have learned that the brain makes hash codes out of seemingly random neural firings. “Our key insight is that, effectively, the brain implements a temporal hash function on the world,” Aguirre says. Providing a way to crack that world-ordering code, he adds, “is the power of this intellectual approach.”

Virginia Hughes is a science journalist specializing in neuroscience, genetics, and medicine. 
