Like some other futurists, Ray Kurzweil thinks the best way to avoid aging is to avoid biology altogether. With a sufficient understanding of the brain, he says, we’ll be able to upload our minds to (presumably non-organic) structures and become digitally immortal. This might sound plausible enough, if a bit speculative, since the pace of technological advancement can seem surreal. Who’s going to rule out such an idea so early in the game?
Answer: Susan Schneider, philosopher and cognitive scientist at the University of Connecticut and member of the Technology and Ethics study group at Yale University’s Interdisciplinary Center for Bioethics. “It would be silly to claim you can ‘upload’ your consciousness to a computer, the way futurists like Ray Kurzweil envision,” she says. “You wouldn’t be conscious, and it wouldn’t be you. Uploading would not be a route to digital immortality but suicide.”
Tomorrow, in Portugal, she’ll be presenting more of her remarks as the featured speaker at a conference dedicated to her work, put on by the Lisbon Mind and Cognition Group, an organization with ties to the New University of Lisbon. “Whether or not you think our minds are actually computational, our abilities to interface with machines, from virtual reality technologies such as Oculus Rift to our more everyday use of smartphones and wearable gadgetry, are undergoing a profound shift,” the conference’s website states. “We seek to motivate serious philosophical analysis of these changes and assess their implications.”
Schneider spoke to Nautilus about her thoughts on the mind, the potential promise and danger of AI, and the conference in Lisbon.
Why is Ray Kurzweil wrong about what we can do with the mind?
Kurzweil is a true visionary, but he believes that the development of AI will lead to a technological utopia. That may be the case, but I don’t see how it could involve uploading your mind. Digital immortality means that we basically back ourselves up on a computer or use brain chips to replace all parts of the brain. But as I once argued in a brief article in the New York Times, if AIs that seem conscious—like Samantha in the film Her—aren’t actually conscious, then a version of our uploaded minds, despite appearances, may not be conscious either. In other words, we’d forfeit consciousness. On the other hand, if Her-like AIs are conscious, then uploaded minds may be, too—but what can guarantee that the uploaded mind would be you, rather than just a digital copy of you that lives on while you are dead?
In any case, if someone is going to try to do this—there have been preliminary steps at Oxford’s Future of Humanity Institute, and with the OpenWorm Project—they had better make sure that the type of computer your brain is supposed to be “uploaded” onto is actually capable of being conscious. Silicon, for instance, may be capable of fast information processing but, unlike carbon, perhaps not of supporting consciousness.
The only route to immortality for us would involve the preservation of consciousness. You’d be better off sticking to biological enhancements and perhaps cryonics to avoid death—but that’s not digital survival. It is too early to tell whether machines can be conscious. We could find this out if we developed brain chips that replaced the parts of the brain responsible for consciousness and found that they preserved conscious experience.
Is technology advancing too quickly for humanity’s good?
One wouldn’t want medical technology to advance any slower, and it’s exciting to have so many developments in that field. I do agree with Kurzweil that the next twenty years will probably be marked by the development of Artificial General Intelligence and that superintelligence will follow. We already see signs that AI will change the face of warfare and will be a part of our everyday lives, from self-driving cars to brain enhancements.
However, I would be worried if artificial intelligence advanced too quickly. The recent successes of Google DeepMind, such as AlphaGo, coupled with the open sourcing of AI by Elon Musk and others, suggest that superintelligence could be developed, and faster than we think. This is not science fiction—this is science fact. The problem is that it could rewrite its own programming. Indeed, a recent book by the philosopher Nick Bostrom, Superintelligence, convinced many that there is a “control problem” with superintelligent AI—its design can quickly morph into something that is well beyond human understanding. As a result, superintelligence could be impossible to control. At that point, it wouldn’t be safe for humankind.
Would non-superintelligent AI be any safer?
Probably, but not necessarily. For example, there are already androids being developed for the care of the elderly in Japan. They are nowhere near being AGIs—flexible, domain-general artificial intelligences, machines that don’t just excel at chess or Go—but I suspect that they will need to be. Think about what a household assistant does on a daily basis: machines would likely need to be AGIs to have the flexible kind of reasoning that household management and eldercare demand. What if someone has an emergency, or gets hurt because the android did something inept? To be more effective, the machines will need to become increasingly smart.
But if the android is an AGI, could it be conscious? If it is conscious—if it can feel pain and have a range of emotions—then it shouldn’t be our servant. If we treat AI badly, this may come back to haunt us. As in the film I, Robot, they may treat us as we treated them. But if they aren’t conscious, that is a game changer, too. In that case, I don’t worry about exploiting them. It doesn’t feel like anything to be them.
What tales of AI from science fiction do you think we should particularly avoid fulfilling?
There are so many. My all-time favorite is Huxley’s dystopian satire, Brave New World, which warns us of the twin abuses of rampant consumerism and technology in the hands of an authoritarian dictatorship. Brave New World depicts a technologically advanced society in which everyone is complacent, yet where the family has withered away and childbearing is no longer a natural process—something an unfeeling superintelligent AI might very well impose. Instead, children are bred in centers where, via genetic engineering, five distinct castes are produced. Only the top two exhibit genetic variation; the other castes consist of multiple clones from a single fertilized egg. All members of society are trained to strongly identify with their caste, and to appreciate whatever is good for society, especially the constant consumption of goods and, in particular, the mild hallucinogen soma that makes everyone blissful.
What do you plan on talking about at this month’s conference?
The best way to “sculpt” our minds. As we move further into the 21st century, we’ll have opportunities to enhance our experiences, intelligence, and personalities with virtual reality and brain chips. Would you engage in some “cosmetic neurology,” or would you hesitate to change who you are? These brain-implant technologies may also help decide whether the mind is “extended,” going beyond the biological brain. Philosophers such as Andy Clark, Robert Clowes, and David Chalmers have suggested this is possible. The Defense Advanced Research Projects Agency is developing brain chips to treat various illnesses, and brain chips are already used to treat Parkinson’s disease. Suppose, in the future, you are able to replace much of your brain with chips, or even upload these parts onto a computer. Although such scenarios may seem like science fiction, the technology is already under development. Would the self be part human, part machine?
These questions cannot be settled by science alone. They call for an interdisciplinary dialogue, and I’ve urged that philosophy play a key role in these debates, for we can’t answer them without deliberating on metaphysical questions about the nature of self, mind, and person. I discuss many of these issues in a documentary on my work.
Cole Little is a freelance writer who attends Clemson University.