Why Is the Human Brain So Efficient?

How massive parallelism lifts the brain’s performance above that of AI.

The brain is complex; in humans it consists of about 100 billion neurons, making on the order of 100 trillion connections. It is often compared with another complex system that has enormous problem-solving power: the digital computer. Both the brain and the computer contain a large number of elementary units—neurons and transistors, respectively—that are wired into complex circuits to process information conveyed by electrical signals. At a global level, the architectures of the brain and the computer resemble each other, consisting of largely separate circuits for input, output, central processing, and memory.1

Which has more problem-solving power—the brain or the computer? Given the rapid advances in computer technology in the past decades, you might think that the computer has the edge. Indeed, computers have been built and programmed to defeat human masters in complex games, such as chess in the 1990s and recently Go, as well as in encyclopedic knowledge contests, such as the TV show Jeopardy! As of this writing, however, humans triumph over computers in numerous real-world tasks—ranging from identifying a bicycle or a particular pedestrian on a crowded city street to reaching for a cup of tea and moving it smoothly to one’s lips—let alone conceptualization and creativity.

So why is the computer good at certain tasks whereas the brain is better at others? Comparing the computer and the brain has been instructive to both computer engineers and neuroscientists. This comparison started at the dawn of the modern computer era, in a small but profound book entitled The Computer and the Brain, by John von Neumann, a polymath who in the 1940s pioneered the design of a computer architecture that is still the basis of most modern computers today.2 Let’s look at some of these comparisons in numbers (Table 1).

The computer has huge advantages over the brain in the speed of basic operations.3 Personal computers nowadays can perform elementary arithmetic operations, such as addition, at a speed of 10 billion operations per second. We can estimate the speed of elementary operations in the brain by the elementary processes through which neurons transmit information and communicate with each other. For example, neurons “fire” action potentials—spikes of electrical signals initiated near the neuronal cell bodies and transmitted down their long extensions called axons, which link with their downstream partner neurons. Information is encoded in the frequency and timing of these spikes. The highest frequency of neuronal firing is about 1,000 spikes per second. As another example, neurons transmit information to their partner neurons mostly by releasing chemical neurotransmitters at specialized structures at axon terminals called synapses, and their partner neurons convert the binding of neurotransmitters back to electrical signals in a process called synaptic transmission. The fastest synaptic transmission takes about 1 millisecond. Thus both in terms of spikes and synaptic transmission, the brain can perform at most about a thousand basic operations per second, or 10 million times slower than the computer.4
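
To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python; the constants are simply the estimates quoted above, not benchmarks of any particular machine or neuron.

```python
# Back-of-the-envelope comparison of elementary operation rates, using the
# figures quoted in the text (estimates, not measurements).
computer_ops_per_sec = 10e9  # ~10 billion arithmetic operations per second
brain_ops_per_sec = 1e3      # ~1,000 spikes or synaptic events per second

ratio = computer_ops_per_sec / brain_ops_per_sec
print(f"The computer is ~{ratio:,.0f} times faster per basic operation.")
# -> The computer is ~10,000,000 times faster per basic operation.
```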

The computer also has huge advantages over the brain in the precision of basic operations. The computer can represent quantities (numbers) with any desired precision according to the bits (binary digits, or 0s and 1s) assigned to each number. For instance, a 32-bit number has a precision of 1 in 2³², or about 4.3 billion. Empirical evidence suggests that most quantities in the nervous system (for instance, the firing frequency of neurons, which is often used to represent the intensity of stimuli) have variability of a few percent due to biological noise, or a precision of 1 in 100 at best, which is tens of millions of times worse than a computer.5
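
The same comparison in code, a minimal sketch with the brain's 1-in-100 figure taken from the estimate above:

```python
# Precision of an n-bit binary number versus the estimated precision of a
# neuronal signal (illustrative figures only).
bits = 32
computer_levels = 2 ** bits  # 4,294,967,296 distinguishable values
brain_levels = 100           # ~1 in 100, limited by biological noise

print(f"32-bit precision: 1 in {computer_levels:,}")
print(f"Gap: roughly {computer_levels // brain_levels:,}-fold")
# -> 32-bit precision: 1 in 4,294,967,296
# -> Gap: roughly 42,949,672-fold
```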

A pro tennis player can follow the trajectory of a ball served at a speed up to 160 mph.

The calculations performed by the brain, however, are neither slow nor imprecise. For example, a professional tennis player can follow the trajectory of a tennis ball after it is served at a speed as high as 160 miles per hour, move to the optimal spot on the court, position his or her arm, and swing the racket to return the ball in the opponent’s court, all within a few hundred milliseconds. Moreover, the brain can accomplish all these tasks (with the help of the body it controls) with power consumption about one tenth that of a personal computer. How does the brain achieve that? An important difference between the computer and the brain is the mode by which information is processed within each system. Computer tasks are performed largely in serial steps. This can be seen in the way engineers program computers by creating a sequential flow of instructions. For this sequential cascade of operations, high precision is necessary at each step, as errors accumulate and amplify in successive steps. The brain also uses serial steps for information processing. In the tennis return example, information flows from the eye to the brain and then to the spinal cord to control muscle contraction in the legs, trunk, arms, and wrist.
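
Why does serial processing demand high precision? A short simulation makes the point: if each stage of a pipeline perturbs its result by even 1 percent, the spread of the final answer grows with the number of stages. This is a toy sketch; the 1 percent noise level and the stage counts are assumptions for illustration.

```python
import random

def serial_pipeline(value, steps, noise=0.01):
    """Pass a value through `steps` stages, each adding ~1% relative error."""
    for _ in range(steps):
        value *= 1 + random.gauss(0, noise)
    return value

random.seed(0)
for steps in (1, 10, 100, 1000):
    results = [serial_pipeline(1.0, steps) for _ in range(2000)]
    mean = sum(results) / len(results)
    sd = (sum((r - mean) ** 2 for r in results) / len(results)) ** 0.5
    print(f"{steps:5d} steps: final spread ~{sd / mean:.1%}")
# The spread grows roughly as noise * sqrt(steps): errors accumulate.
```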

But the brain also employs massively parallel processing, taking advantage of the large number of neurons and large number of connections each neuron makes. For instance, the moving tennis ball activates many cells in the retina called photoreceptors, whose job is to convert light into electrical signals. These signals are then transmitted to many different kinds of neurons in the retina in parallel. By the time signals originating in the photoreceptor cells have passed through two to three synaptic connections in the retina, information regarding the location, direction, and speed of the ball has been extracted by parallel neuronal circuits and is transmitted in parallel to the brain. Likewise, the motor cortex (part of the cerebral cortex that is responsible for volitional motor control) sends commands in parallel to control muscle contraction in the legs, the trunk, the arms, and the wrist, such that the body and the arms are simultaneously well positioned to receive the incoming ball.

This massively parallel strategy is possible because each neuron collects inputs from and sends output to many other neurons—on the order of 1,000 on average for both input and output for a mammalian neuron. (By contrast, each transistor has only three nodes for input and output all together.) Information from a single neuron can be delivered to many parallel downstream pathways. At the same time, many neurons that process the same information can pool their inputs to the same downstream neuron. This latter property is particularly useful for enhancing the precision of information processing. For example, information represented by an individual neuron may be noisy (say, with a precision of 1 in 100). By taking the average of input from 100 neurons carrying the same information, the common downstream partner neuron can represent the information with much higher precision (about 1 in 1,000 in this case).6
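
The payoff of pooling can be checked with a short Monte Carlo simulation; a minimal sketch following the numbers above (1 percent noise per neuron, 100 neurons pooled; see also footnote 6).

```python
import random

random.seed(1)

def noisy_neuron(true_value=1.0, noise=0.01):
    """One neuron's report of a quantity, with ~1% biological noise."""
    return true_value + random.gauss(0, noise)

n, trials = 100, 10000
pooled = [sum(noisy_neuron() for _ in range(n)) / n for _ in range(trials)]
mean = sum(pooled) / trials
sd = (sum((p - mean) ** 2 for p in pooled) / trials) ** 0.5
print(f"single neuron: ~1.0% noise; average of {n} neurons: ~{sd:.2%}")
# -> average of 100 neurons: ~0.10%, i.e., a precision of about 1 in 1,000
```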

The computer and the brain also have similarities and differences in the signaling mode of their elementary units. The transistor employs digital signaling, which uses discrete values (0s and 1s) to represent information. The spike in neuronal axons is also a digital signal since the neuron either fires or does not fire a spike at any given time, and when it fires, all spikes are approximately the same size and shape; this property contributes to reliable long-distance spike propagation. However, neurons also utilize analog signaling, which uses continuous values to represent information. Some neurons (like most neurons in our retina) are nonspiking, and their output is transmitted by graded electrical signals (which, unlike spikes, can vary continuously in size) that can transmit more information than can spikes. The receiving end of neurons (reception typically occurs in the dendrites) also uses analog signaling to integrate up to thousands of inputs, enabling the dendrites to perform complex computations.7
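
A toy model captures this hybrid scheme: the dendrites sum graded excitatory and inhibitory inputs (analog), and the axon reports an all-or-none spike only if the sum crosses a threshold (digital). The specific weights and threshold here are illustrative assumptions, not measured values.

```python
def integrate_and_fire(excitatory, inhibitory, threshold=1.0):
    """Analog integration in the 'dendrites', digital output on the 'axon'."""
    potential = sum(excitatory) - sum(inhibitory)  # graded, continuous value
    return 1 if potential >= threshold else 0      # all-or-none spike

print(integrate_and_fire([0.4, 0.5, 0.3], inhibitory=[0.1]))  # 1.1 -> spike (1)
print(integrate_and_fire([0.4, 0.5, 0.3], inhibitory=[0.4]))  # 0.8 -> silent (0)
```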

Your brain is 10 million times slower than a computer.

Another salient property of the brain, which is clearly at play in the return of service example from tennis, is that the connection strengths between neurons can be modified in response to activity and experience—a process that is widely believed by neuroscientists to be the basis for learning and memory. Repetitive training enables the neuronal circuits to become better configured for the tasks being performed, resulting in greatly improved speed and precision.
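
A schematic way to express use-dependent modification is a Hebbian-style update, in which a connection strengthens whenever the neurons on both sides are active together. This is a cartoon of the principle, not a biophysical model; the learning rate and activity traces are made-up values.

```python
def hebbian_update(weight, pre, post, rate=0.1):
    """Strengthen the synapse when pre- and postsynaptic activity coincide."""
    return weight + rate * pre * post

weight = 0.2
# Repeated practice: coincident activity gradually strengthens the connection.
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]:
    weight = hebbian_update(weight, pre, post)
print(f"connection strength after training: {weight:.2f}")  # 0.2 -> 0.50
```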

Over the past decades, engineers have taken inspiration from the brain to improve computer design. The principles of parallel processing and use-dependent modification of connection strength have both been incorporated into modern computers. For example, increased parallelism, such as the use of multiple processors (cores) in a single computer, is a current trend in computer design. As another example, “deep learning” in the discipline of machine learning and artificial intelligence, which has enjoyed great success in recent years and accounts for rapid advances in object and speech recognition in computers and mobile devices, was inspired by findings about the mammalian visual system.8 As in the mammalian visual system, deep learning employs multiple layers to represent increasingly abstract features (e.g., of a visual object or speech), and the weights of connections between different layers are adjusted through learning rather than designed by engineers; a toy example is sketched below. These recent advances have expanded the repertoire of tasks the computer is capable of performing. Still, the brain remains superior to state-of-the-art computers in flexibility, generalizability, and learning capability. As neuroscientists uncover more secrets about the brain (increasingly aided by the use of computers), engineers can take more inspiration from the workings of the brain to further improve the architecture and performance of computers. Whichever emerges as the winner for particular tasks, these interdisciplinary cross-fertilizations will undoubtedly advance both neuroscience and computer engineering.
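
To make the deep-learning idea concrete, here is a minimal network in Python with NumPy: two layers of adjustable connection weights learn the XOR function, which no single-layer network can represent. The task, layer sizes, and learning rate are toy choices for illustration; real systems for vision and speech use many more layers and units.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)    # input -> hidden weights
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)    # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)       # hidden layer extracts intermediate features
    out = sigmoid(h @ W2 + b2)     # output layer combines them
    # Backpropagation: nudge each weight downhill on the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))    # converges toward [0, 1, 1, 0]
```

As in the brain, the “knowledge” here lives in the pattern of connection strengths, shaped by experience rather than by explicit design.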

Liqun Luo is a professor in the School of Humanities and Sciences, and professor, by courtesy, of neurobiology, at Stanford University.

The author wishes to thank Ethan Richman and Jing Xiong for critiques and David Linden for expert editing.

By Liqun Luo, as published in Think Tank: Forty Scientists Explore the Biological Roots of Human Experience, edited by David J. Linden, and published by Yale University Press.

Footnotes

1. This essay was adapted from a section in the introductory chapter of Luo, L. Principles of Neurobiology (Garland Science, New York, NY, 2015), with permission.

2. von Neumann, J. The Computer and the Brain (Yale University Press, New Haven, CT, 2012), 3rd ed.

3. Patterson, D.A. & Hennessy, J.L. Computer Organization and Design (Elsevier, Amsterdam, 2012), 4th ed.

4. The assumption here is that arithmetic operations must convert inputs into outputs, so the speed is limited by basic operations of neuronal communication such as action potentials and synaptic transmission. There are exceptions to these limitations. For example, nonspiking neurons with electrical synapses (connections between neurons without the use of chemical neurotransmitters) can in principle transmit information faster than the approximately one millisecond limit; so can events occurring locally in dendrites.

5. Noise can reflect the fact that many neurobiological processes, such as neurotransmitter release, are probabilistic. For example, the same neuron may not produce identical spike patterns in response to identical stimuli in repeated trials.

6. Suppose that the standard deviation (σ) of each input approximates its noise (it reflects how wide the distribution is, in the same units as the mean). For the average of n independent inputs, the expected standard deviation of the mean is σmean = σ/√n. In our example, σ = 0.01 and n = 100; thus σmean = 0.001.

7. For example, dendrites can act as coincidence detectors to sum near-synchronous excitatory input from many different upstream neurons. They can also subtract inhibitory input from excitatory input. The presence of voltage-gated ion channels in certain dendrites enables them to exhibit “nonlinear” properties, such as amplification of electrical signals beyond simple addition.

8. LeCun, Y., Bengio, Y., & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
