
One of the core challenges of modern AI can be demonstrated with a rotating yellow school bus. When the bus is viewed head-on on a country road, a deep-learning neural network confidently and correctly identifies it. When it is laid on its side across the road, though, the algorithm believes—again, with high confidence—that it’s a snowplow. Seen from underneath and at an angle, it is definitely a garbage truck.
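
A minimal sketch of this failure mode, assuming PyTorch, torchvision’s pretrained ResNet-50, and a hypothetical local photo named school_bus.jpg (not a reproduction of the original experiment), shows how a simple rotation can flip a classifier’s top prediction:

```python
# Sketch only: probe how a pretrained ImageNet classifier's top prediction
# changes as the same image is rotated. Image file name is hypothetical.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

image = Image.open("school_bus.jpg")  # hypothetical local image

for angle in (0, 45, 90):
    rotated = image.rotate(angle, expand=True)
    with torch.no_grad():
        probs = model(preprocess(rotated).unsqueeze(0)).softmax(dim=1)
    confidence, idx = probs.max(dim=1)
    print(f"{angle:3d} degrees -> {labels[idx.item()]} ({confidence.item():.2f})")
```

The exact labels depend on the model and the photo; the point is only that a rigid rotation, which changes nothing about the object itself, can push the input far enough from the training distribution to change the answer.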


The problem is one of context. When a new image is sufficiently different from the set of training images, deep learning visual recognition stumbles, even if the difference comes down to a simple rotation or obstruction. And context generation, in turn, seems to depend on a rather remarkable set of wiring and signal generation features—at least, it does in the human brain.

Matthias Kaschube studies that wiring by building models that describe experimentally observed brain activity. Kaschube and his colleagues at the Frankfurt Institute for Advanced Studies, the Max Planck Florida Institute for Neuroscience, the University of Minnesota, and elsewhere have found a host of features that stand in stark contrast to the circuits that engineers build: spontaneous activity and correlation, dynamic context generation, unreliable transmission, and straight-up noise. These seem to be fundamental features of what some call the universe’s most complex object—the brain.

Matthias Kaschube. Courtesy of the Frankfurt Institute for Advanced Studies

What’s the biggest difference between a computer circuit and a brain circuit?

Our computers are digital devices. They operate with binary units that can be on or off, while neurons are analog devices. Their output is binary—a neuron fires in a given moment or not—but their input can be graded, and their activity depends on many factors. Also, the computing systems that we build are deterministic. You provide a certain input and you get a certain output. When you provide the same input again and again, you get the same output. This is very different in the brain. In the brain, even if you choose the exact same stimulus, the response varies from trial to trial.

Where does this variable response in the brain come from?


There are various hypotheses. There is, for example, unreliable synaptic transmission. This is something that an engineer would not normally build into a system. When one neuron is active, and a signal runs down the axon, that signal is not guaranteed to actually reach the next neuron. It makes it across the synapse with a probability like one half, or even less. This introduces a lot of noise into the system.
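
As a rough illustration (a toy model, not Kaschube’s), one can simulate a neuron whose inputs each cross their synapse with probability of about one half. The very same stimulus then produces a different response from trial to trial:

```python
# Toy model of unreliable synaptic transmission: each incoming spike is
# transmitted with probability ~0.5, so identical inputs yield variable output.
import numpy as np

rng = np.random.default_rng(0)
n_synapses, p_release, threshold = 100, 0.5, 50

def response(presynaptic_spikes):
    # Bernoulli "coin flip" at every synapse that receives a spike.
    transmitted = presynaptic_spikes & (rng.random(n_synapses) < p_release)
    return int(transmitted.sum() >= threshold)  # fire if enough drive arrives

stimulus = np.ones(n_synapses, dtype=bool)      # the exact same input every trial
print([response(stimulus) for _ in range(10)])  # e.g. [1, 0, 1, 1, 0, ...]
```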


Another factor is ongoing activity in other parts of the brain. For instance, the visual cortex will be activated by a visual scene, but it also receives a lot of information from other brain areas. Since there’s a lot of cross wiring, these other parts of the brain can affect activity patterns in the visual cortex at any given time. This can modulate incoming signals quite significantly. Part of this modulation might be useful in generating context and encoding expectation. You hear a barking dog, and you turn around, and you are looking for a dog. We understand more and more that part of the brain’s response variability is actually meaningful and contains important background information or context.

What role does spontaneous activity play?


Even if you don’t have any visual input, the visual cortex is not silent. It shows widespread and strong patterns of activity, which sometimes are as strong as activity evoked with real visual stimuli, and can be similar to it in structure. Given this similarity, spontaneous activity may represent something like visual imagination. You see something but, at the same time, you visually think about what you saw yesterday. Spontaneous activity could contribute to trial-by-trial variability in the brain.

Do we understand the nature of noise in the brain?

There’s debate about whether the fluctuations that we see in experiments are really meaningful and contain information that we don’t understand yet, or whether they’re just noise arising from the stochasticity of biochemical processes and are something that the brain needs to ignore or average out. To arrive at a better model of ongoing fluctuations we have to understand their sources. We can, for instance, do this by looking at animal behavior. Neural activity in animals that are processing a visual stimulus depends on whether they are moving, and whether they are alert. It also helps to record from many neurons, so that you can understand how activity in other parts of the cortex contributes to the variability in any one part. We will only be able to understand this variability once we’re able to record from a large part of the brain simultaneously.

Can real noise be useful?


It might be useful that the brain is not completely deterministic, that every time we look at the same thing we process it slightly differently. Having slightly different responses to the same stimulus can help us detect different aspects of the scene. However, there are a whole lot of details out there in the world that are irrelevant to us. A visual scene can have hundreds or thousands of features, many of which don’t matter. Making the response a little bit noisy may help us to ignore some of the less relevant features. Think about evolution, where random mutations are followed by selection of the fittest. Taking this as an analogy, it could be that the brain adds noise in order to sample different representations of what’s out there. And by exploring the space of potential representations, the brain may try to find the one that is most suited given the current context. Noise could facilitate this search.
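
One way to picture that idea, as a loose analogy rather than anything from Kaschube’s work, is a noisy search: a purely greedy search gets trapped on the nearest peak of a bumpy landscape, while larger random perturbations let it sample other candidate solutions and find a better one.

```python
# Toy illustration of noise aiding search: propose random changes ("mutations"),
# keep only those that improve a bumpy score function ("selection").
import numpy as np

rng = np.random.default_rng(1)

def score(x):
    # a bumpy "fitness" landscape with many local peaks; the best is near x ~ 4.7
    return np.sin(3 * x) - 0.1 * (x - 4.0) ** 2

def search(noise_scale, start=-3.0, steps=5000):
    x = start
    for _ in range(steps):
        candidate = x + rng.normal(scale=noise_scale)  # random exploration
        if score(candidate) > score(x):                # keep only improvements
            x = candidate
    return x, score(x)

print("small noise :", search(noise_scale=0.01))  # stuck on a nearby local peak
print("larger noise:", search(noise_scale=1.0))   # much more likely to reach the best peak
```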

When does spontaneous activity arise in the developing brain?

Interestingly, patterns of spontaneous neuronal activity are highly structured already early in development, before there is any structured sensory input. In the visual cortex, for example, such activity appears before the eyes open. After the eyes open, some of these spontaneous patterns become associated with actual visual stimuli. Once this assignment is established, it appears to be stable and may be maintained for life.

Do these early patterns speak to the nature vs. nurture debate?


Such spontaneous activity patterns are present early on, but we haven’t been able to understand what gives rise to their structure. It could be that this structure is hardwired in circuitry and genetically predetermined, but it appears more plausible that it’s the result of a self-organization process. There are concepts from dynamical systems theory that describe how a self-organizing system can form complex patterns. The starting point would be just a few basic rules of how neurons form circuits and how circuit activity in turn reshapes connectivity, in a feedback loop. Really, it’s less about nature vs. nurture, and more a question of how much nature you actually need to set up the system. You could say the whole brain is in our genes, but this is not possible because the genetic information in our DNA is way too small to determine all of our synaptic connections. What could be genetically encoded are just a few simple rules that set up dynamics that can evolve and generate structure, largely autonomously at very early stages in development and increasingly shaped by sensory input at later stages.


How can you probe the genetic component of early spontaneous activity?

One interesting possibility is to look at identical twins. We actually did something along these lines several years ago. In that study we looked at littermates in a colony of cats, and we found that the spacing between active domains in the visual cortex was more similar in littermates than in animals from different litters. This suggests that there is a genetic component in determining this basic property. The other possibility is to look at an earlier stage in development, when these structures are first emerging, and to try to manipulate them.


Is this early spontaneous activity entirely short-range?

We have found through experiment and modeling that, despite the fact that early synaptic connections are purely local, interesting long-range correlated activity can emerge. This would not be surprising in the mature cortex because there you have actual long-range anatomical connections. But even early on, when you don’t have long-range connections yet, you get long-range correlations. Long-range correlations are interesting because they connect different modules that carry out different kinds of processing. For example, when your visual cortex processes different parts of a scene, long-range correlations are likely involved in integrating information across visual space.

Does the brain use error correcting codes?

It’s very plausible. For instance, there’s this old concept of attractor networks. The idea is that a network converges to one of a finite set of activity states. When you provide some input, you reach one of these attractors, and when you provide a nearby input, you reach the same attractor. This makes the network robust against small amounts of input variation and noise. This has been discussed for years, but it’s still hard to get good empirical evidence for or against it in the brain. It would be helpful to get recordings from sufficient numbers of cells under sufficiently stable conditions combined with methods that allow us to directly perturb neural activity.
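
A classic toy version of this idea, offered here as a generic illustration rather than as a model Kaschube endorses, is a Hopfield network: a corrupted copy of a stored pattern is pulled back to the stored state, which is exactly the error-correcting behavior described above.

```python
# A tiny Hopfield-style attractor network: corrupt a stored pattern with noise,
# then let the network dynamics pull the state back toward the stored memory.
import numpy as np

rng = np.random.default_rng(2)
n = 64
pattern = rng.choice([-1, 1], size=n)          # one stored memory
W = np.outer(pattern, pattern).astype(float)   # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                       # no self-connections

noisy = pattern.copy()
noisy[rng.choice(n, size=12, replace=False)] *= -1   # flip 12 of 64 units

state = noisy.copy()
for _ in range(5):                             # a few synchronous update steps
    state = np.where(W @ state >= 0, 1, -1)

print("units correct before:", int((noisy == pattern).sum()), "/", n)
print("units correct after: ", int((state == pattern).sum()), "/", n)
```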



How does modern AI compare to the brain?

Deep neural networks, which are currently the best-performing AI in a wide range of tasks, are obviously inspired by brain circuits. Deep networks have neurons, a sort of hierarchy, and plasticity of connections. They may or may not provide a good analogy to what’s actually happening in the first processing stages inside the brain, and there is currently an intense debate on this in the field. But one problem with current AI is that it’s highly context-specific. You train your deep network on a data set, and it works for that particular data set, but it doesn’t adjust when you move to a different data set. What is missing is a notion of context. As context changes, AI must interpret incoming signals in a different way. This kind of flexibility is a great challenge to current artificial intelligence. Obviously, the brain is able to do that.

How might we solve the context problem?


We need to draw more inspiration from the brain. For instance, one interpretation of spontaneous activity in the brain is that it encodes context. Something similar could be useful in AI. Also, the trial-by-trial response variability in the brain could be a hint of what we need to do in artificial intelligence. The same goes for unreliable synaptic transmission, which is present in the brain and which has an analogue that is sometimes used in machine learning to avoid over-fitting. This is an interesting direction, and I think there’s a lot more to study here.
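
The machine-learning counterpart he is presumably alluding to is dropout, which randomly silences units during training. A minimal sketch, assuming PyTorch:

```python
# Dropout as an "unreliable transmission" analogue: during training, each hidden
# unit is silenced with probability 0.5, which helps prevent over-fitting.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),       # randomly zero half the hidden units per forward pass
    nn.Linear(256, 10),
)

x = torch.randn(1, 784)
model.train()
print(model(x))              # stochastic: different units drop on every pass
model.eval()
print(model(x))              # deterministic: dropout is disabled at test time
```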

Is the spontaneous long-range order that you observed a clue about how context can be generated dynamically?

Potentially, yes. What emerges in the brain early in development could be interpreted as the neural basis of visual representations that involve relationships among different objects and their parts distributed across visual space. A lot of what we need for visual scene processing is to form the right relationship between the object and the parts of the object that we see, and this might be associated with these long-range correlations. This is speculative because understanding the functional implication of these correlations is difficult. But long-range spontaneous order is suggestive, and it intuitively makes sense that correlations between different functional modules in the brain could play a role in scene processing.

Are deep-learning networks wired like the brain?


One aspect of deep neural networks that has been criticized a lot is that their connections are typically feed-forward, meaning activity propagates from the input layer through a sequence of intermediate layers until it reaches the final output layer. There are no loops in feed-forward networks. Recurrent connections, i.e., connections between neurons within a given layer, are either absent or modeled in a crude way. Convolutional neural networks do have a convolutional kernel, which acts a little bit like recurrent connections, but there’s relatively little use of more realistic and long-range connections. Also, there aren’t typically any top-down connections that send information back toward the input layer. Part of the reason why recurrent and top-down connections are avoided is that they make the training of networks more difficult. But in the cortex, top-down connections are abundant and recurrent connections are in the majority. A feed-forward network is really a crude oversimplification and very distinct from the highly interconnected networks in the brain.
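
To make the distinction concrete, here is a toy sketch, assuming PyTorch and not modeled on any particular cortical circuit, of a layer that adds recurrent (within-layer) connections to an ordinary feed-forward projection:

```python
# Feed-forward drive plus recurrent (within-layer) connections: activity
# reverberates inside the layer for a few steps instead of passing straight through.
import torch
from torch import nn

class RecurrentLayer(nn.Module):
    def __init__(self, n_in, n_hidden, steps=3):
        super().__init__()
        self.feedforward = nn.Linear(n_in, n_hidden)    # input -> layer
        self.recurrent = nn.Linear(n_hidden, n_hidden)  # layer -> same layer
        self.steps = steps

    def forward(self, x):
        h = torch.zeros(x.shape[0], self.recurrent.out_features)
        drive = self.feedforward(x)                     # fixed feed-forward input
        for _ in range(self.steps):                     # recurrent reverberation
            h = torch.tanh(drive + self.recurrent(h))
        return h

layer = RecurrentLayer(n_in=16, n_hidden=32)
print(layer(torch.randn(4, 16)).shape)                  # torch.Size([4, 32])
```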

Do deep-learning networks react to stimuli in the same way as the brain?

Even at the first cortical processing stage of visual information, in the visual cortex, connections carrying input from the eyes are by far the minority of total connections, and this is even more drastic deeper inside the brain. A large part of neuronal activity is ongoing cross talk among different brain areas, and sensory input sometimes only appears to play a modulatory role in this internal activity. That’s a very different perspective than the one that you have usually in deep neural networks, in which neurons only get activated, basically, when they are provided with input. So, both anatomically and in terms of functional properties, the brain seems to operate very differently from a deep neural network. There’s still a considerable gap between real intelligence and so-called artificial intelligence.


Michael Segal is Nautilus’ editor in chief.

Lead image: J. Helgason / Shutterstock

For more on Matthias Kaschube and neurological networks, see “Surprising Network Activity in the Immature Brain” on our Max Planck Neuroscience channel.
