
“Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them.” That’s a quote from Leopold Aschenbrenner, a San Francisco-based AI researcher in his mid-20s who was recently fired from OpenAI and who, according to his own website, “recently founded an investment firm focused on artificial general intelligence.” Aschenbrenner, a former economics researcher at Oxford University’s Global Priorities Institute, believes that artificial superintelligence is just around the corner and has written a 165-page essay explaining why. I spent the last weekend reading the essay, “Situational Awareness: The Decade Ahead,” and I now understand a lot better what is going on, if not in AI, then at least in San Francisco, sanctuary of tech visionaries.


Let me first give you the gist of his argument. Aschenbrenner says that current AI systems are scaling up incredibly quickly. The most relevant factors driving the growth of AI performance at the moment are ever-larger computing clusters and improvements in the algorithms. Neither of these factors is anywhere near saturated. That’s why, he says, performance will continue to improve exponentially for at least several more years, and that is sufficient for AI to exceed human intelligence on pretty much all tasks by 2027. We would then have artificial general intelligence (AGI)—according to Aschenbrenner.
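To see how quickly that kind of compounding adds up, here is a minimal back-of-envelope sketch in Python. The growth rates are my own illustrative assumptions in the spirit of Aschenbrenner’s estimates (roughly half an order of magnitude per year from bigger clusters, and another half from better algorithms), not exact figures from the essay:

```python
# Back-of-envelope: compound growth in "effective compute".
# Both rates are illustrative assumptions, not figures from the essay:
# ~0.5 orders of magnitude (OOM) per year from bigger clusters,
# ~0.5 OOM per year from algorithmic efficiency gains.
COMPUTE_OOM_PER_YEAR = 0.5
ALGO_OOM_PER_YEAR = 0.5

for years in (1, 2, 3, 4):
    total_oom = years * (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR)
    print(f"{years} yr: ~10^{total_oom:.0f} = {10**total_oom:,.0f}x effective compute")
```

At those rates, four years buys a factor of 10,000 in effective compute, which is why the essay can treat another GPT-2-to-GPT-4-sized jump by 2027 as straight-line extrapolation rather than a miracle.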

I agree that it won’t be long now until AI outsmarts humans.

It is maybe not so surprising that someone who makes money from the imminent arrival of AGI argues that AGI will arrive imminently. Nevertheless, his argument is worth considering. He predicts that a significant contribution to this trend will come from what he calls “unhobbling” the AI models. By this he means that current AIs have limitations that can easily be overcome, and soon will be: They lack persistent memory, for example, and they can’t use computing tools on their own. Algorithms are also likely to develop away from large language models toward more efficient learning methods. (Aschenbrenner doesn’t mention it, but personally I think a big game changer will be symbolic reasoning, because good reasoning is basically logic, and we need more of it.)


So far, I agree with Aschenbrenner. I think he’s right that it won’t be long now until AI outsmarts humans. I believe this not so much because I think AIs are smart but because we are not. The human brain is not a good thinking machine—I speak from personal experience: It’s slow and makes constant mistakes. Just speeding up human thought and avoiding faulty conclusions will have a dramatic impact on the world.

I also agree that soon after this, artificial intelligence will be able to research itself and to improve its own algorithms. Where I get off the bus is when Aschenbrenner concludes that this will lead to the “intelligence explosion”—formerly known as the “technological singularity”—accompanied by extremely rapid progress in science and technology and society overall. The reason I don’t believe this is going to happen is that Aschenbrenner underestimates the two major limiting factors: energy and data.

Let us first look at what he says about energy limitations. Training AI models takes an enormous number of computing operations, and hence an enormous amount of energy. According to Aschenbrenner, by 2028 the most advanced models will run on 10 gigawatts of power at a cost of several hundred billion dollars. By 2030, they’ll run on 100 gigawatts at a cost of a trillion dollars.

For context, a typical power plant delivers something in the range of 1 gigawatt. So that means building 10 power plants in addition to the supercomputer cluster by 2028. What would all those power stations run on? According to Aschenbrenner, natural gas. “Even the 100 [gigawatt] cluster is surprisingly doable,” he writes, because that would take only about 1,200 or so new natural gas wells. And if that doesn’t work, I guess they can just go the Sam Altman way and switch to nuclear fusion power.
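That well count is easy to sanity-check. Here is a rough estimate in which every input is my own ballpark assumption, not a number from the essay: a strong shale well flowing about 10 million cubic feet of gas per day, roughly 1.04 megajoules of chemical energy per cubic foot, and a combined-cycle plant converting it to electricity at about 55 percent efficiency.

```python
# Rough sanity check of the "~1,200 wells for 100 GW" figure.
# All inputs are ballpark assumptions, not numbers from the essay.
WELL_OUTPUT_CF_PER_DAY = 10e6  # strong shale gas well, cubic feet/day
ENERGY_PER_CF_MJ = 1.04        # ~1,000 BTU per cubic foot of natural gas
PLANT_EFFICIENCY = 0.55        # modern combined-cycle gas turbine

SECONDS_PER_DAY = 86_400
thermal_mw = WELL_OUTPUT_CF_PER_DAY * ENERGY_PER_CF_MJ / SECONDS_PER_DAY  # MJ/s = MW
electric_mw_per_well = thermal_mw * PLANT_EFFICIENCY  # ~66 MW per well

wells_needed = 100_000 / electric_mw_per_well  # 100 GW = 100,000 MW
print(f"~{electric_mw_per_well:.0f} MW electric per well")
print(f"~{wells_needed:,.0f} wells for a 100 GW cluster")
```

That comes out to roughly 1,500 wells, the same ballpark as his figure. So the arithmetic is internally consistent; my objection is to the logistics, not the multiplication.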


Then there’s the data. Currently the most common AIs—large language models like GPT and Meta’s Llama—have already been trained on much of the data that is available online. Absorbing the likes of Wikipedia and Google Books was the easy part. Getting new data is much harder. Of course, there is new data on the Internet every day, but it’s not substantial compared to what is already there, and for the most part it’s not the information AIs need to become better—it’s just the information they need to stay up to date. They need data of the kind that is, for example, stored in people’s brains, what philosophers call “tacit knowledge.” Think of properties of physical objects that you can’t extract from video footage. So even if your algorithms get better and learn faster all the time, a computer can’t learn from what isn’t there.

Frontier research tends to overestimate the pace at which the world can be changed.

No problem, Aschenbrenner says. You deploy robots that collect novel real-world data. Where do you get those robots from? Well, Aschenbrenner thinks that AIs will “solve robotics,” meaning presumably any remaining robot problems (like recognizing objects in any environment and performing all manner of tasks successfully without human intervention). And the first AI-created robots will build factories to build more of these robots. “Robo-factories could produce more robo-factories in an unconstrained way, leading to an industrial explosion,” he writes. “Think: self-replicating robot factories quickly covering all of the Nevada desert.”

Alright. But what will they build the factories with, I wonder? Resources that will be mined and transported by—let me guess—more robots? Perhaps those will be built in the factories constructed from the resources mined by the robots. Do you see the problem?

What Aschenbrenner misses is that creating 100-gigawatt supercomputing clusters or huge robot workforces will not just require AGI. It will require changing the entire world economy and the products and services it provides. You can’t ramp up the production of one high-end product without also ramping up the production of all the components that go into it. It requires physical changes, stuff that needs to be moved, plans that need to be approved, people who have to do things. And everything that needs to be done by people is very slow. There’s a reason CERN spent $20 million and several years just on a plan for its next, bigger collider before even doing anything.


Unlike the Large Hadron Collider, which is 17 miles in circumference, a 100-gigawatt supercomputing cluster itself probably wouldn’t be all that large—in fact, you want to keep it compact, because the larger it gets, the farther you have to transport data around. But the size of the plant that would power the 100-gigawatt cluster depends strongly on the energy source you use to supply it. Natural gas power plants tend to be relatively small, while nuclear power tends to take up more real estate (because of safety requirements). Wind and solar farms take up even more terrain. Nuclear fusion is inherently a compact energy source, but since we don’t have any working fusion power stations, how little space it would take up is anyone’s guess.
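To get a feel for the difference, here is a crude land-use comparison for supplying a 100-gigawatt average load. The power densities are my own round-number assumptions; real projects vary widely.

```python
# Crude land-use estimate for a 100 GW average electricity supply.
# Average power densities are round-number assumptions; real sites vary a lot.
DENSITY_MW_PER_KM2 = {
    "natural gas": 1000,  # compact plant plus immediate site
    "nuclear":     200,   # plant plus safety/exclusion zone
    "solar farm":  10,    # ~50 MW peak per km2 at ~20% capacity factor
    "wind farm":   2,     # turbine spacing dominates the footprint
}

TARGET_GW = 100
for source, density in DENSITY_MW_PER_KM2.items():
    area_km2 = TARGET_GW * 1000 / density
    print(f"{source:12s} ~{area_km2:>7,.0f} km^2")
```

On those assumptions, solar would need on the order of 10,000 square kilometers, roughly the area of Lebanon, while gas fits on a few town-sized plots. It’s clear why the essay reaches for gas.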

Leaving aside that climate change is about to crush the world economy, the robot revolution will happen, eventually, but not within a couple of years. It’ll take decades at best. One must have spent a lot of time group-thinking in San Francisco and Oxford to lose touch with the real world so much that one can seriously think it’s possible to build a 100-gigawatt supercomputing cluster and a robot workforce within six years.

That said, I think Aschenbrenner is right that AGI will almost certainly unlock huge progress in science and technology. This is simply because a lot of scientific knowledge currently goes to waste: No human can read everything that’s been published in the scientific literature. But an AGI will. There must be lots of insights hidden in the scientific literature that can be unearthed without doing any new research whatsoever.

For example, it could find new drugs by understanding that a compound which was previously unsuccessful in treating one illness might be good for treating another. It could see that a thorny mathematical problem in one area of science was previously solved in another. It might find correlations in data that no one ever thought of looking for, maybe settling the debate of whether dark matter is real or finding evidence for new physics. If I had a few billionaire friends, that’s what I’d tell them to spend their bucks on.
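This kind of literature mining isn’t hypothetical. The classic recipe is Don Swanson’s “ABC” model of literature-based discovery: If many papers connect A to B, and many others connect B to C, but no paper connects A to C directly, then A and C form a candidate hidden link. Here is a toy sketch; the “papers” are invented stand-ins, though the A-C pair echoes Swanson’s actual 1986 finding linking fish oil to Raynaud’s syndrome.

```python
# Toy version of Swanson's ABC literature-based discovery.
# The "papers" are invented stand-ins for term co-occurrence in abstracts.
papers = [
    {"fish oil", "blood viscosity"},
    {"fish oil", "platelet aggregation"},
    {"blood viscosity", "Raynaud's syndrome"},
    {"platelet aggregation", "Raynaud's syndrome"},
]

def hidden_links(papers, a, c):
    """Bridging terms B where A-B and B-C each co-occur in some paper,
    while A and C never co-occur directly."""
    if any(a in p and c in p for p in papers):
        return set()  # A and C are already linked; nothing hidden
    near_a = set().union(*(p for p in papers if a in p)) - {a}
    near_c = set().union(*(p for p in papers if c in p)) - {c}
    return near_a & near_c

print(hidden_links(papers, "fish oil", "Raynaud's syndrome"))
# -> {'blood viscosity', 'platelet aggregation'}
```

Swanson ran essentially this search by hand across medical journals; an AGI could run it across all of science at once.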


The second half of Aschenbrenner’s essay is dedicated to the security risks that will go along with AGI, and I largely agree with him.

Most people on this planet, including all governments, currently dramatically underestimate just how big an impact AGI will make, and how much power a superintelligence will give to anyone in possession of it. If they appreciated its future impact, they would not let private companies develop it basically unrestricted. Once they wake up, governments will rapidly try to gain control of whatever AGI they can get their hands on and put severe limitations on its use.

Let me stress: It’s not that I think governments restricting AI research is good, or that I want this to happen—I merely think this is what will happen. For practical purposes, the quasi-nationalization of AI will probably mean that high-compute queries, like overthrowing the United States government, will require security clearance.

Aschenbrenner also discusses the super-alignment problem—that it will be basically impossible to make sure an intelligence that is vastly superior to our own will “align” with our values. While I agree that this is a serious problem that requires consideration, I think it’s not the most urgent problem right now. Before we worry about superintelligent AI trying to rule the world itself, we need to worry about humans trying to abuse it to rule the world.


What can we extrapolate from a trend of wrong predictions? In 1960, Herbert Simon, a Nobel Prize and Turing Award winner, speculated that “machines will be capable, within 20 years, of doing any work a man can do.” In the 1970s, cognitive scientist Marvin Minsky predicted that human-level machine intelligence was just a few years away. In a 1993 essay, computer scientist Vernor Vinge predicted that the technological singularity would occur within 30 years. 

What I take away from this list of failed predictions is that people involved in frontier research tend to vastly overestimate the pace at which the world can be changed. I wish that we actually lived in the world that Aschenbrenner seems to think we live in. I can’t wait for superhuman intelligence. But I’m afraid the intelligence explosion isn’t as near as he thinks.

