Consider Data, the android from Star Trek: The Next Generation. Suppose he finds himself on a hostile planet, surrounded by aliens that are just about to dismantle him. In a last-ditch effort to survive, he quickly uploads his artificial brain onto the Enterprise computer. Does Data survive? And could he, at least in principle, do this every time he’s in a crisis, so that he’d be immortal?

It is common to assume that an AI could achieve immortality by creating backup copies of itself, thereby transferring its consciousness from one computer to the next. This view is encouraged by science-fiction stories. But is it right?

The question I am asking is whether an AI could achieve what we might call “functional immortality,” uploading and downloading backups long enough to witness the Big Crunch or even to be a spectator at the heat death of the universe, the biggest event of them all.

There has been a good deal of discussion concerning the promise and peril of superintelligent AI, a hypothetical form of AI that is smarter than normal humans in every respect: scientific reasoning, mathematical abilities, and more. Science fiction has long depicted superintelligent AIs as superbeings in another respect as well. Unlike unenhanced humans, who succumb to biological senescence, AIs can achieve functional immortality by uploading and downloading their programs. Inching toward omniscience and immortality, AIs become almost Godlike.

Uploading efforts like Data’s will not do the trick, however. Notice that there is an ambiguity as to whether the expression “AI” refers to a particular AI or to a type of AI program. Consider that the expression “the Tesla Model 3” could mean the particular Tesla in your driveway or it could mean the type of car (i.e., the make and model). The type would endure even after your car was dismantled and destroyed. In a similar vein, we can distinguish the particular AI, Data, from the type of program he runs (the Data program, if you will).

When we ask about Data’s survival in the example above, we are asking whether this concrete, particular android, Data, can survive until the end of the universe. Data is a particular AI, and as such, he is vulnerable to destruction, just like we are. There may be other androids of this type, but their existence does not ensure the survival of Data; it just ensures that there are other androids that have Data’s type of program.

Here, one might object that if you copy Data’s program precisely, you haven’t merely made a copy of the Data program—you’ve somehow actually made Data himself, all over again. Data’s mind and consciousness transfer. (Here, I am assuming, for the purpose of discussion, that AIs could, at least in principle, be conscious beings and have minds, an issue I question in my book Artificial You.)

This objection relies upon a certain assumption about the nature of the self or mind: the position that the mind (or self) is a software program. You might think that just as you can upload and download a computer file, your mind or self is just a program, and it too can be uploaded and downloaded.

This view is deeply mistaken. A program, like a mathematical equation, is not a concrete thing that exists in space or time. Nor does it cause events in the world. It is what philosophers call an “abstract entity.” Of course, an equation or software program can be written down, so you might assume that it is in space and time, but this is to confuse the equation or program (a type) with a mere inscription of it (as philosophers say, a “token”). The inscription is on the page, but the equation or program is not.

It would be very odd if selves and minds turned out to be abstract entities like numbers and programs. After all, we can tell, by introspecting, that moments pass for us—we are temporal beings. And our actions cause events in the spatiotemporal world. Abstract entities don’t do anything, and they aren’t anywhere.

So where does all this leave us? There Data is, on a hostile planet, surrounded by aliens that are about to destroy him. He quickly uploads his artificial brain onto a computer on the Enterprise. Does he survive or not?

In my view, we now have a distinct instance (a token) of Data’s type of mind running on the Enterprise computer. We could ask: Can that particular instance or token survive the destruction of the computer by uploading again (i.e., transferring the mind of that token to a different computer)? No. Again, uploading would merely create a different instance (or token) of the same program type. An individual’s survival depends on where things stand at the token level, not at the level of types. Data, despite his name, is not a program.
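The type/token distinction can be made concrete in programming terms. Here is a minimal sketch in Python (my own illustration; the class name and details are invented for the example): a class plays the role of the abstract program type, and each instance of it is a concrete, particular token.

import copy

# The class plays the role of the abstract program type;
# each instance is a concrete, particular token.
class DataProgram:
    def __init__(self, memories):
        self.memories = memories  # each token carries its own state

original = DataProgram(memories=["stranded on a hostile planet"])
backup = copy.deepcopy(original)  # "uploading": make a faithful copy

print(type(original) is type(backup))        # True: one shared type
print(original is backup)                    # False: two distinct tokens
print(original.memories == backup.memories)  # True: identical content

del original  # destroying one token leaves the other token intact,
              # but the survivor was never the original

On this reading, the perfect resemblance between backup and original is sameness of type, not sameness of token, which is why the copy’s persistence would not count as Data’s survival.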

It is also worth underscoring that a particular AI could still live a very long time, insofar as its parts are extremely durable. Perhaps Data could achieve functional immortality by avoiding accidents and having his parts replaced as they wear out. It would be an extraordinary feat to avoid accidents over a period of several billion years, to say the least. (I suppose, at least in principle, a future human could try to do this as well through innovations in biotechnology.) But my view is compatible with this scenario, because Data’s survival in this case does not happen by transferring his program from one physical object to another.

Susan Schneider is the NASA-Baruch Blumberg chair at the Library of Congress and NASA, and the author of Artificial You: AI and the Future of the Mind.

Lead image originally appeared in “Datalore,” a 1988 episode of Star Trek: The Next Generation.
