
Most neuroscientists believe that the brain learns by rewiring itself—by changing the strength of connections between brain cells, or neurons. But experimental results published last year, from a lab at Lund University in Sweden, hint that we need to change our approach. They suggest the brain learns in a way more analogous to that of a computer: It encodes information into molecules inside neurons and reads out that information for use in computational operations.


With the computer scientist Adam King, I co-authored a book, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. We argued that well-established results in cognitive science and computer science imply that computation in the brain must resemble computation in a computer in just this way. So, of course, I am fascinated by these results.

A computer does not learn by rewiring itself; it learns by encoding facts into sequences of ‘0s’ and ‘1s’ called bit strings, which it stores in addressable registers. Registers are strings of tiny switches. When a switch is set one way, it physically represents ‘1’; when set the other way, it physically represents ‘0’. The registers in a computer’s memory are numbered, and the numbers constitute addresses. The computer stores a bit string by choosing an available address and setting the switches in accord with the string to be stored there.
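To make the register picture concrete, here is a toy sketch in Python. It models nothing about real hardware or real brains; the memory dictionary and the store and fetch functions are invented purely for illustration.

```python
# Toy model of addressable memory: numbered registers holding bit strings.
# Purely illustrative -- a sketch of the idea, not a model of real hardware.

memory = {}  # address -> bit string

def store(address: int, value: int, width: int = 8) -> None:
    """Encode an integer as a fixed-width bit string; set the 'switches' at an address."""
    memory[address] = format(value, f"0{width}b")  # e.g., 5 -> '00000101'

def fetch(address: int) -> int:
    """Read the bit string back out and decode it as an integer."""
    return int(memory[address], 2)

store(42, 5)        # register 42 now holds the pattern of switch settings for 5
print(memory[42])   # 00000101
print(fetch(42))    # 5
```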


Most computational operations combine two bit strings to make a new bit string. They are crucial to a computer’s learning process, because the raw inputs it gets often do not contain the facts the computer needs in usable form; raw data are transformed into behaviorally useful facts through computational operations. To perform such an operation, the computer uses its list of addresses to find two strings in its memory, does something with them to generate a new string, and stores that string at a new address. The more a computer has learned, the more of its memory registers contain facts garnered from its experience, facts that it retrieves and manipulates with computational operations. In this way, it fills its memory with computed bit strings that enable it to take effective actions based on what it has learned.
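Continuing the toy sketch above (and reusing its store and fetch functions), a computational operation of this kind can be rendered in a few lines:

```python
# Fetch two stored bit strings, combine them (here, by addition), and file
# the result away at a fresh address. Illustrative only.

def combine(addr_a: int, addr_b: int, addr_out: int) -> None:
    """Derive a new stored fact from two existing ones."""
    store(addr_out, fetch(addr_a) + fetch(addr_b))

store(0, 3)         # a fact garnered from experience
store(1, 4)         # another fact
combine(0, 1, 2)    # compute a new, behaviorally useful fact
print(memory[2])    # 00000111 -- the bit string for 7
```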


Most neuroscientists accept that the brain also computes in some sense. However, they think it does so by modifying its synapses, the links between neurons. The idea is that raw sensory inputs, which initially produce incoherent actions, help the brain change its structure in order to produce behavior better suited to the experienced environment. This idea goes back to the empiricist philosophers such as Locke, Hume, and Berkeley. The new connections between neurons correspond to the associations that the empiricist philosophers thought linked together raw sensations to make the mental dust balls that constituted complex concepts. On this view, experience does not implant facts in the brain, which may be retrieved as needed; rather, experience molds the brain so that it responds to further experience more appropriately. That is why the neuroscientific term for learning is “plasticity.” The brain learns because experience molds it, rather than because experience implants facts.

The problem is that experience does implant facts. We all know this, because we retrieve and make use of them throughout the day. For example, most of us can make a mental map of our environment and use it to determine our actions. We may realize that we can pick up a prescription on the way to picking up our children at school because the pharmacy is not far out of the way. Even insects make such maps. The honeybee can use its map to find its way between any two points in its foraging territory, and when a successful forager returns to the hive, it does a dance that tells other foragers where the food source is on their shared map.
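A toy sketch suggests how such a map reduces to stored numbers: hold locations as coordinates, and the distance and bearing between any two points can be computed on demand. The landmark names and coordinates below are invented for illustration; no claim is made about how a bee actually represents them.

```python
import math

# Hypothetical coordinates (in meters) on a shared "map" -- purely illustrative.
landmarks = {
    "hive":    (0.0, 0.0),
    "flowers": (120.0, -35.0),
}

def course(frm: str, to: str) -> tuple[float, float]:
    """Distance and compass-style bearing from one stored location to another."""
    x1, y1 = landmarks[frm]
    x2, y2 = landmarks[to]
    dx, dy = x2 - x1, y2 - y1
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = north, clockwise
    return distance, bearing

print(course("hive", "flowers"))  # roughly (125.0, 106.3)
```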


From a computational point of view, directions and distances are just numbers. And numbers, rendered in binary form, are just bit strings. It’s a profound truth of computer science that there is no such thing as information that is not in a deep sense numerical. Claude Shannon’s famous 1948 paper, which founded the field of information theory, used a symphony concert as an example of an information-transmission problem that could be treated numerically. A consequence is that it does not make sense to say that something stores information but cannot store numbers.
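The point is easy to demonstrate. A distance of 125 meters and a bearing of 106 degrees, once digitized and rendered in binary, are just bit strings:

```python
# Any measured quantity becomes a bit string once digitized.
distance_m, bearing_deg = 125, 106
print(format(distance_m, "016b"))   # 0000000001111101
print(format(bearing_deg, "016b"))  # 0000000001101010
```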

Neuroscientists have not come to terms with this truth. I have repeatedly asked roomfuls of my colleagues, first, whether they believe that the brain stores information by changing synaptic connections—they all say, yes—and then how the brain might store a number in an altered pattern of synaptic connections. They are stumped, or refuse to answer.


Most recently, I asked this of about 20 leading neuroscientists at a workshop at the Massachusetts Institute of Technology. We had just heard a talk on a technique called barcoding, which involves writing numbers into DNA. As King and I explained in our book, DNA molecules in cell nuclei store inherited information in addressable registers—genes—very much like the addressable registers in computer memory. My colleagues had no trouble accepting the possibility of storing data in this way. But when I asked how one could store numbers in synapses, several became angry or diverted the discussion with questions like, “What’s a number?”
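The principle behind such barcoding is simple arithmetic: with four bases to choose from, each base can carry two bits. Here is a minimal sketch; the particular base-to-bits mapping is arbitrary and chosen purely for illustration.

```python
# Sketch of the idea behind DNA "barcoding": two bits per base.
# The base-to-bits assignment below is arbitrary and illustrative.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {base: bits for bits, base in TO_BASE.items()}

def number_to_dna(n: int, width: int = 8) -> str:
    """Write a number into a DNA-letter sequence, two bits per base."""
    bits = format(n, f"0{width}b")
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, width, 2))

def dna_to_number(seq: str) -> int:
    """Read the number back out of the sequence."""
    return int("".join(TO_BITS[b] for b in seq), 2)

seq = number_to_dna(125)            # 01111101 -> 'CTTC'
print(seq, dna_to_number(seq))      # CTTC 125
```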


Brains routinely remember the durations of intervals, a piece of simple numerical information if ever there was one. The Swedish researchers worked with the giant Purkinje cells of the cerebellum, which learned the interval between the onset of stimulation of one of their inputs and a subsequent brief stimulation of another of their inputs. The results strongly implied that the interval-duration memory was stored inside the Purkinje cell, not in its synaptic inputs. Input arriving at the synapses caused the learned information inside the cell to be read out into a nerve signal that we know controls the timing of a simple learned behavior.

Inside neurons are molecules. Many molecules make excellent switches, and storing information in molecular switches is much more energy efficient than doing it in synapses. Learning may involve putting something like bit strings into banks of molecular switches found inside individual neurons—rather than rewiring the neural circuits. That is a profoundly different conception of learning and memory than the one currently entertained.
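As a toy rendering of that conjecture (and nothing more; no claim is intended about which molecules play this role), one can picture a bank of two-state switches inside a single cell holding a learned interval:

```python
# Entirely conjectural toy model: a bank of two-state molecular switches
# inside one neuron, holding a learned interval as a bit string.
class SwitchBank:
    def __init__(self, size: int = 16):
        self.switches = [0] * size           # each switch set one way or the other

    def write(self, value: int) -> None:
        bits = format(value, f"0{len(self.switches)}b")
        self.switches = [int(b) for b in bits]

    def read(self) -> int:
        return int("".join(map(str, self.switches)), 2)

cell_memory = SwitchBank()
cell_memory.write(200)       # store a learned interval, say 200 milliseconds
print(cell_memory.read())    # read it out to time a response: 200
```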

C.R. Gallistel is a professor of psychology and cognitive neuroscience at Rutgers University.  
