
Will We Reverse-Engineer the Human Brain Within 50 Years?

Gary Marcus can’t understand why people are shocked when he calls the brain a computer. The 43-year-old professor of psychology at New York University, author of Kluge, about the haphazard evolution of the brain, and a leading researcher in how children acquire language, grins and says it’s a generational thing.

“I know there’s a philosophical school of dualism that says there’s some kind of spirit separate from body, which creates thought detached from the brain,” he says. “But for someone like me who grew up reading neuroscience and cognitive science, it’s unsurprising the brain is a computer. It’s how I’ve always understood it to be.”

Marcus’s most recent book, Guitar Zero, charts his quest to learn to play the guitar as he approached 40. It delves into the brain’s perpetual ability to learn new things, subverting the myth that our brains are practically cast in stone by middle age. In recent years, Marcus has been perplexed by the fact it’s nearly impossible to discern how the human brain differs from that of other primates. He details his insights in his bracing Nautilus essay, “Where Uniqueness Lies.”

We recently caught
up with the ardent writer and professor, who speaks in sentences that
race by like a Top Fuel dragster, in his NYU office. A transcript of
our conversation follows.

How
are our brains like a computer?

I
think our brains aren’t just like a computer. I think our brains
are a
computer. The question is what kind of computer is it?

There
are lots of people who get agitated when I say that. They say, “Hey,
we’re creative, and machines aren’t, so therefore we must not be
computers.” But that’s not a good argument. What a computer does
fundamentally is it takes in information, it transforms that
information, maybe connects it with some information in memory, and
it builds an action based on that. Your iPhone is effectively a
computer, your laptop is a computer, a pocket calculator—not that
we have those anymore—those are all computers. But your brain is a
computer too.
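Marcus’s take-in, transform, integrate-with-memory, act description can be sketched in a few lines of Python. This is a toy illustration, not any real cognitive model; the stimulus, the stored “memory,” and the function names are all invented:

```python
# Toy sketch of the loop Marcus describes: take in information,
# transform it, connect it with memory, and build an action.
memory = {"hot stove": "pull hand away"}

def perceive(stimulus):
    # "Retina" stage: turn raw input into an internal representation.
    return stimulus.strip().lower()

def act(stimulus):
    # Transform the input, integrate it with stored memory,
    # and produce an action based on both.
    percept = perceive(stimulus)
    return memory.get(percept, "observe and remember")

print(act("Hot Stove"))    # -> pull hand away
print(act("novel thing"))  # -> observe and remember
```

The point is not the code itself but that any device organized this way, silicon or neural, counts as an information processor in Marcus’s sense.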

For
example, take information from your retina. It gets changed into
patterns of chemical and electrical activity. The brain is doing
information processing. There’s no other description than that. And
it’s then integrated with information that’s stored in your
memory, and then it controls your actions. So in a sense that is a
computer information processor. The brain isn’t just like a
computer, it is a computer.

But
there are lots of different computers that have different structures.
Over time, computers, for example, are starting to have more parallel
processing integrated in them. So if you look at an Xbox, which is a
kind of computer, or a PlayStation, they have these heavy duty chips
that do graphics processing at the same time as they’re doing
computing. You know, it’s gone to the next level. They’re also
using these chips to calculate the next frame in an image,
and so there’s lots of parallel processing going on in modern
computers. But not the kind of computers that I learned about when I
was a kid. I learned on a Commodore 64 and there was not a lot of
parallel processing going on there—a tiny bit. And the trend in
computers is to have more parallel processing.
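The sequential-versus-parallel contrast can be made concrete with a toy sketch. This is illustrative only; note that CPython threads don’t actually speed up CPU-bound work, so it shows the organization of the computation, not the speedup real graphics chips deliver:

```python
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    # Stand-in for the per-pixel work a graphics chip does each frame.
    return pixel * 2 + 1

pixels = list(range(8))

# Sequential: one value at a time, Commodore 64 style.
sequential = [shade(p) for p in pixels]

# Parallel: the same computation distributed across workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(shade, pixels))

assert sequential == parallel  # same answer, different organization
```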

Well,
that’s clearly true in the brain. It’s a lot of parallel
processing but it’s not all parallel. Sometimes people think
everything in the brain happens simultaneously, but there are some
sequential things that happen step-by-step in the human brain.
Language is outputted in a sequential way, word by word. Even though
I talk fast, you know, it’s still linear, one word at a time. So
there are things that are linear and sequential in the human brain.

But
we do a lot of parallel processing. That’s different from older
computers, it’s not as different from newer computers. There are
particular computations that we do very well that no one has figured
out how to get a computer to do. We are better at learning language
maybe because it’s integrated with a lot of real world knowledge.
We are better at understanding common sense. So if I spill a cup of
coffee, you don’t have to, as a human being, project every single
individual molecule to tell me that it’s going to wind up on the
floor, right? And for computers to do physics right now, we need to
have complete information about every molecule. That’s very
different from the way people view things. So there are different
kinds of computations that we do. But you wouldn’t want to say that
a Windows computer and a Macintosh computer aren’t both computers,
just because they do different computations and have different
operating systems. They both, at their core, do the same kinds of
computations and they just get arranged in different ways.

What
can a computer do better than the brain?

Computers
can do lots of things better than brains. In fact, there’s an issue
about whether 50 years from now there’ll be anything that computers
can’t do better. But for now, computers are better at calculation
and anything [involving] raw mathematics or anything requiring a lot
of repetition. This is why accountants get largely replaced by
machines. They’re simply better at repetitive calculation than
humans. There’s no human that is ever going to be able to catch
computers in that regard. This also means that computers can play
chess better than all the people in the world, if you build the right
kind of computer. My cell phone doesn’t quite have enough
computational power to beat Garry Kasparov. But you can make a custom
computer that can beat him, and in fact one has. So anything in pure
computation, machines are going to be better.

There
are some things that we’re still better at. We’re still more
creative. We still have more common sense. We’re still better at
language. But nobody is making the argument, at least I’m certainly
not making the argument, that machines will never catch us on any of
those things.

There
are some things for which we’ve figured out how to build the
software. And what’s interesting is that some of the things machines
do better than us, they do in ways that are like us. They just do
them more reliably. So you could teach a computer—this is a sort of
fictitious example—but you could teach a computer to do long
division the way that a human brain does. There are actually better
ways to do it. But you could do it that way, and a computer would do
it over and over again, never make a mistake, never forget to carry
the one, and things like that. Whereas human beings, they’re not
that reliable. They may get things right 90 percent of the time. So
you could teach the computer to do it exactly the way that a person
does, and it would just do it better because it’s more reliable.
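That fictitious example can be spelled out. Here is a sketch of the schoolbook long-division procedure, digit by digit, the way a person does it on paper, with the “carry” (the running remainder) handled explicitly:

```python
def long_division(dividend, divisor):
    # Work through the dividend digit by digit, as on paper.
    quotient = ""
    remainder = 0
    for digit in str(dividend):
        # "Bring down" the next digit.
        remainder = remainder * 10 + int(digit)
        quotient += str(remainder // divisor)
        # Never forget to carry: the remainder rolls forward.
        remainder %= divisor
    return int(quotient), remainder

print(long_division(1234, 7))  # -> (176, 2), since 7 * 176 + 2 = 1234
```

A computer running these steps never drops the carry; a person doing the same steps by hand gets them right maybe 90 percent of the time.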

Is
the process by which infants develop language similar to how we
program a computer to learn language?

Good
question. Children learn language in what appears to be a very
natural way without any explicit lessons, without any corrective
feedback. They aren’t told right or wrong; they mostly just listen to us.
If Chomsky is right, if Steven Pinker is right, they have some
internal machinery that helps them to zero in from that information
onto the correct grammar. Chomsky’s phrase for it is “a language
acquisition device.” Pinker calls it “a language instinct,” and
I think they’re probably right. There is innate machinery that
helps us to acquire language. Some of it may be special to language,
which is what those guys have been suggesting, some of it may not be
special to language. Some of it might be just general facts about how
our memory works.

But
either way, I think we’re born to learn language, and we do so in a
pretty effortless way, and a pretty robust way. Which is to say that
pretty much everybody learns language regardless of their
environment.

So
there are some things you have to be explicitly taught, like reading.
If you’re not exposed to reading, and if you don’t have explicit
lessons, you probably won’t learn to read. It’s not universal
across cultures, and it’s not universal within our culture.
Reading is a hard-won skill that we acquire, whereas language is
something that happens pretty much automatically.

Now,
when you look at machines trying to acquire language—and this is a
project for places like Google and Microsoft and Apple and so
forth—the first thing to say is none of those guys have succeeded.
So if you look at something like Google Translate, it works some of
the time, and sometimes it comes back with things that don’t look
at all like English. So it’s a really hard problem. I don’t mean
to knock Google here. You know, Apple, Microsoft—all these guys are
working on it. But nobody has really solved it. The techniques that
they’re using are different. When a child learns a language, what
they’re doing, I think, is a kind of mind-reading: they guess what
you’re thinking about and connect it with their understanding of
grammar. Machines, for the most part, are looking at lots of
sentences, maybe pairs of sentences, like something in the English
version of the Canadian parliament record matched with something in
the French. These machines don’t really have access to an
understanding of the world similar to what a child has.
So I think there is certainly work to try to put in more semantics,
more meaning, into these machines, as they acquire language. So
nobody has really built a machine that I would say closely parallels
the acquisition patterns of a child.
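The sentence-pair matching Marcus describes can be sketched with a toy co-occurrence model. The sentence pairs here are invented, and the Dice-style scoring is just one simple choice; real systems like Google Translate are far more elaborate:

```python
from collections import Counter

# Tiny stand-in for an aligned corpus like the English/French
# Canadian parliament record.
pairs = [
    ("the house", "la maison"),
    ("the blue house", "la maison bleue"),
    ("the car", "la voiture"),
]

cooc = Counter()       # how often an English/French word pair co-occurs
en_count = Counter()   # how often each English word appears
fr_count = Counter()   # how often each French word appears
for en, fr in pairs:
    en_words, fr_words = en.split(), fr.split()
    en_count.update(en_words)
    fr_count.update(fr_words)
    for e in en_words:
        for f in fr_words:
            cooc[(e, f)] += 1

def best_translation(word):
    # Dice score: reward words that occur together and rarely apart.
    scores = {f: 2 * n / (en_count[word] + fr_count[f])
              for (e, f), n in cooc.items() if e == word}
    return max(scores, key=scores.get)

print(best_translation("house"))  # -> maison
```

Notice that nothing here knows what a house is; the system matches strings, which is exactly the gap with a child’s meaning-driven learning that Marcus points to.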

You
say in 50 years we may be able to reverse-engineer the brain. Is that
because we will have at one point discovered the calculations to
produce cognition?

Well,
that’s what we want to figure out. I mean everybody in cognitive
neuroscience wants to figure out how the brain works. I mean, some of
the work that’s actually done is crude. So instead of telling us
how exactly the brain does its computation, we learn where it does
its computation. So this will be sort of like trying to explain
politics by saying, “Oh, it all happens in the Capitol building.”
And it’ll be sort of true, and sort of not true, and not very
useful. I mean, knowing where Congress is doesn’t tell you the
power dynamics, right? So we’re sort of at that level right now. We
know where the brain does its computations, we don’t know how it
does those computations. But that’s what we’re all trying to
figure out. We all want to know, What is the relation between what
individual neurons do and how we actually behave? And again, there’s
no reason in principle we can’t figure that out. But there are so
many moving parts that it’s difficult to just sort of guess at it.

It’s
actually a lot more complicated than, say, the structure of DNA,
which was already a hard problem. But if you look at the structure of
DNA, you’ve got the same molecule appearing in every cell, and you
can do these neat crystallography techniques that yield very clear
results. I don’t want to say it was easy, or that I would have
figured it out, but it’s a tractable problem. Figuring out how the
brain works is a very, very complicated problem, where, you know, every
neuron works differently, there’s lots of interdependence between
different neurons. So it’s going to take a while.

But
on the other hand, we know that the brain is where all the action is
for human thought. We know if you injure parts of the brain then you
change thinking. So there’s no question that the secrets to human
thought do lie within the brain. It’s a question of what are the
techniques and what are the theoretical insights that are going to
allow us to unravel this puzzle.