The Philosopher Who Says We Should Play God

Why ethical objections to interfering with nature are too late.

By Steve Paulson

Australian bioethicist Julian Savulescu has a knack for provocation. Take human cloning. He says most of us would readily accept it if it benefited us. As for eugenics—creating smarter, stronger, more beautiful babies—he believes we have an ethical obligation to use advanced technology to select the best possible children.

A protégé of the philosopher Peter Singer, Savulescu is a prominent moral philosopher at the University of Oxford, where he directs the Uehiro Centre for Practical Ethics. He also edits the Journal of Medical Ethics. Savulescu isn’t shy about stepping onto ethical minefields. He sees nothing wrong with doping to help cyclists climb those steep mountains in the Tour de France. Some elite athletes will always cheat to boost their performance, so instead of trying to enforce rules that will be broken, he claims we’d be better off with a system that allows low-dose doping.

So does Savulescu just get off being outrageous? “I actually think of myself as the voice of common sense,” he says, though he admits to receiving his share of hate mail. He’s frustrated by how hard it is to have reasoned arguments about loaded issues without getting flamed on the Internet. Savulescu thinks we need to become far more adept at sorting out difficult moral issues. Otherwise, he says, the human species will face dire consequences in the coming decades.

I caught up with Savulescu in Australia, where he was on sabbatical. We talked about a wide range of looming ethical issues, from new technology that will change how we’re born and how we die, to transhumanism, to how the world might end.

The candid philosopher: “I actually think of myself as the voice of common sense,” says Julian Savulescu. “If you actually looked at things without any kind of baggage, you’d view them like me.” University of Oxford



What ethical challenges are raised by new technologies like genetic engineering and human cloning?

People will vote with their feet once those technologies offer significant benefits. At the moment they have concerns about nature or God, but that will change if you can double somebody’s lifespan with genetic engineering, which we’ve done in animals. People will use genetic engineering if you can ensure that your child won’t get Alzheimer’s disease or Parkinson’s disease or diabetes. When it offers spare organs and the cure of aging, then of course it will be used.

Human cloning is now off the table. Will that change?

Cloning of farm animals is routine, and cloning in humans is used to produce stem cells for the treatment of disease. It’s now possible to clone a human being. You can split an early human embryo into identical twins. That’s safe and it’s reasonably efficient. You could freeze one of those identical twins and then implant it some years after the first, so you could have identical twins 10 years apart in age. So that technology is already there. It’s not done because there’s no clear point to it, apart from curiosity or the hubris of a scientist. But once there is a real need, people will see the benefits.

Why would we ever need to do this?

Imagine women having children later and later, to the point where even in vitro fertilization struggles to succeed. Let’s say you’ve got one embryo left and that last embryo was implanted. Then you’re in a car accident and about to lose the pregnancy from bleeding. You could take a cell from that embryo and clone another embryo if that pregnancy was lost. It would give you the chance to have your own child. So one of the lessons of ethics is you can’t make general pronouncements—for instance, that cloning is always unethical and must be banned under all circumstances.

So you don’t see any fundamental ethical objection to human cloning?

In reality, hardly anybody does. Remember that 1 in 300 pregnancies involves clones. Identical twins are clones. They are far more genetically related to each other than a nuclear transfer clone—one created from a skin cell of one individual—is to its donor.

But twins are not something we engineer. That just happened.

One of the big mistakes in ethics is to think that means make all the difference. The fact that we’ve done it or nature has done it is irrelevant to individuals and is largely irrelevant to society. What difference would it make if a couple of identical twins came about not through a natural splitting of an embryo, but because some IVF doctor had divided the embryo on the third day after conception? Should we suddenly treat them differently? The fact that they arose through choice and not chance is morally irrelevant.


So the idea that we could play god and tamper with the laws of nature, creating things that wouldn’t otherwise exist, is a red herring?

We’re playing god every day. As the English philosopher Thomas Hobbes said, the natural state for human beings is a life that’s nasty, brutish, and short. We play god when we vaccinate. We play god when we give women pain relief during labor. The challenge is to decide how to change the course of nature, not whether to change it. Our whole life is entirely unnatural. The correction of infertility is interfering in nature. Contraception is interfering in the most fundamental aspect of nature.

But using condoms has nowhere near the ethical complications of altering the genetic makeup of your future baby.

You alter the genetic makeup of your future baby when you smoke or drink alcohol. Viruses alter the human genome. So why would you single out one intentional act aimed at producing a beneficial outcome from all these other events that have far less beneficial outcomes? In my view, we should not only use tests to look for genes so a child is not predisposed to a major genetic disorder, like thalassemia or cystic fibrosis or Down syndrome, but also to look at genes correlated with greater advantages in life. My argument is we ought to select children who have opportunities for better lives. Most people say that’s fine when it comes to diseases, but we shouldn’t interfere in nature once you get into the healthy range.

This raises the specter of tinkering with our genes. You could create smarter, stronger, more beautiful children.

Indeed, you could. In my view, we should choose genes if those characteristics affect a person’s happiness. A rising percentage of kids today are on Ritalin for attention deficit hyperactivity disorder. But that’s not because there’s suddenly been some epidemic of ADHD. It’s because you’re crippled as a human being if you have poor impulse control and can’t concentrate long enough, if you can’t defer small rewards now for larger rewards in the future. Having self-control is extremely important to strategic planning, and Ritalin enhances that characteristic in children at the low end of impulse control. Now, if you were able to test for poor impulse control in embryos, I believe we should select ones with a better chance of having more choices in life, whether they want to be a plumber, a taxi driver, a lawyer, or the president.

It’s one thing to talk about impulse control and quite another to enhance the intelligence of a baby. Doesn’t this raise a whole new level of ethical concerns?

It does raise another level of ethical concerns, but we already aim to enhance intelligence through education. Computers and the Internet are also cognitive enhancers. We give children food supplements and better diets to enhance cognitive ability. So why should we treat a genetic mechanism differently than a dietary supplement or some external technology like the Internet? The only difference is gene therapy is really risky, and that’s why we don’t do it. But if it becomes safe, there’s no difference in ethical terms between gene therapy and any other sort of biological or social intervention. If science gives us the opportunity of improving people’s lives, we should use it.

Won’t the rich have much more access to creating smarter and more beautiful children than the poor?

It could massively increase inequality. We need to create some kind of safety net for people, rather than just ramping up the current trend of ever-increasing inequality. Although the standard of living for many people has increased, the gap has widened: in the 1800s the difference between the richest and poorest country was 3 to 1. It’s more than 100 to 1 today, and the richest three individuals in the world own as much as the poorest 600 million people. So some kinds of ethical constraints are going to have to be placed on unconstrained capitalism. We’re in a period where capitalism has served us very well. My father escaped from Romania after World War II to escape communism. I wouldn’t change that history. But we can’t think capitalism is the end of history. We will need rules to constrain the dark sides of our nature. The market is not going to solve our biggest problems.

Do you worry about eugenics—creating superior groups of people?

People concerned about eugenics remember the Nazi program of sterilization and the extermination of people deemed to be unfit. Now it’s important to recognize this wasn’t unique to Nazi Germany. The extermination part was, but sterilization was common throughout Europe and the United States. Many states in the U.S. had eugenics laws under which people who were intellectually disabled or mentally ill were sterilized against their will. This kind of eugenics was one of the darker sides of the 20th century.

But eugenics just means having a child who is better in some way. Eugenics is alive and well today. When people screen their pregnancies for Down syndrome or intellectual disability, that’s eugenics. What was wrong with Nazi eugenics was that it was involuntary. People had no choice. People today can choose to utilize the fruits of science to make these selection decisions. Today, eugenics is about giving couples the choice of a better or worse life for themselves.

We’ve talked about new reproductive technology. Do we also need to rethink the ethics of how people die?

There are two aspects that we’ll have to confront. One we’re already confronting—how we die—which I think is ethically uninteresting. Of course people should be allowed to decide when and how they exit this world. The reasons we have laws against it are either religious or based on arcane, outdated notions, like the idea that your body belonged to the King and you couldn’t render it unfit for fighting! Such laws are quite inappropriate in a secular society. If I want to end my life and someone else wants to help me, what business is it of the state or other people to interfere?

So what’s the interesting question about death?

The interesting question is how long we should live. At the moment we’ve pretty much maxed out what we can do with treating cardiovascular disease or cancer. But if we could attack aging, which is the real disease that causes adult-onset cancer and cardiovascular disease, stroke and diabetes, people could live healthily for 200 years or longer. Then we’ll face the deep question, how long should we live? How many people should there be? How will we pay for people living to 150? How will younger people carve out a place in society? Will life become boring? These are really deep and difficult questions. Is this something that people should be able to choose, or should we place termination criteria on how long people can live? It may be that our death starts to become not just our choice, but society’s choice. Is it better to have a society with 500 million people living to 80, or 250 million people living to 160? Those are difficult questions that we may well have to decide. This idea that we’ll just leave it to the market to resolve is not going to wash.


Would you like to live 200 or 500 years?

I want to live as long as possible. I don’t see anything being there afterward! I want to live in as good a condition for as long as possible.

So you’re not one of these people who thinks the prospect of death somehow gives life meaning?

No, not at all. The prospect of failure gives life meaning. The reality is people are often prepared to embrace death when it’s not staring them in the face. Some people choose euthanasia not because they want death, but because they no longer want the poor quality of life they have. But if you’re in full health, there are very few people who actually want to die just because they’ve lived too long. I think the challenge is to continue to reinvent yourself and your life. You’re already seeing people today having two or three careers, two or three families during their lives, and they don’t say they’ve had enough. I want to go on as long as possible.

What do you make of Ray Kurzweil and the transhumanists who think there will be some sort of singularity—a merging of human and machine that leads to an entirely new species in a post-human future?

I have some sympathy for them and I think it’s great that they’re out there pushing that line of argument. I’m not a transhumanist or a post-humanist. I think it starts to take on characteristics of a religion and becomes a kind of belief in itself. But the ideas are interesting and need to be taken seriously. I wouldn’t put all my eggs in their basket, but I’d put some eggs in their basket. The capacity for technology to increase in power is exponential; the capacity of humans to control it doesn’t increase exponentially. We have to realize that the technology we’ve created is close to running away from us.

If we speculate about how the ethical landscape might change by the year 2050, what do you see as the biggest challenges ahead?

We’re in a very critical period. We’ll either learn to live with people across the world or we’ll face extinction. We’ve evolved in groups of 150, and to some degree we’ve managed to extend that to nation states. But what you see now is the ability of individuals or small groups to challenge those larger groups. They haven’t yet used weapons of mass destruction such as biological weapons, but within a decade or two those weapons will be in the hands of hundreds of thousands of people. The idea that we can continue to maintain order at a national level but not at an international level is untenable.

So our biggest threat is renegade terrorists with weapons of mass destruction?

I think there are two threats: single individuals or groups using weapons of mass destruction, and the limitations of our moral dispositions as we face problems of collective action. Climate change is not a problem caused by a single individual, but by whole groups. It requires coordination to solve. Historically we could solve those problems when we were in small groups. If we saw other farmers overgrazing and depleting a communal resource, we could punish them. But when it comes to issues like climate change, depletion of resources, global inequality, or the threat of pandemics, we can’t see our own contribution in the same way. Our psychology is a barrier to dealing with collective problems.

Because we evolved in small groups and people outside our tribes were potential enemies. You’re saying we need to get past that psychology?

Yes. Racism is implicit. It’s built in. If you study people’s dispositions, they identify out-group members at a subconscious level and behave differently toward them. That’s not to say we can’t overcome those biases and prejudices through laws or moral education. But we do face a significant challenge. We’re not the kind of animal that’s designed to live in the world that our enormous cognitive capacity has created—global interconnectivity and massively advanced technology. We’re entering a new phase where the rules and codes governing our behavior are no longer suitable. The deeply difficult question is how we meet the moral challenge of becoming less prejudiced and less racist.

There are different kinds of threats. Cosmologists worry about an asteroid hitting us. Nick Bostrom says artificial intelligence could become so sophisticated that it wipes us out. Are you talking about something else?

I think we are the biggest threat to ourselves. The elephant in the room is the human being. For the first time in human history we really are the masters of our destiny. We’ve got enormous potential to have unprecedentedly good lives. We’ll be able to live twice as long. With our computers and the Internet, we already are smarter than any of our predecessors. But we also have the possibility to completely shackle ourselves, if not destroy ourselves. The Internet is a good example. In George Orwell’s 1984, Big Brother was placing us under surveillance, controlling and censoring everything that happened. In some ways we already are under surveillance. But my worry is not the government—at least not in the U.K. or the U.S.; it’s each other. As soon as we publish something, it’s immediately pumped around the Internet to every fanatical group, which then mobilizes within minutes and creates such momentum that it doesn’t matter what you said or what the truth is; what matters is the perception. So we now live under a kind of censorship of each other and that’s just going to increase.


Steve Paulson is the executive producer of Wisconsin Public Radio’s nationally syndicated show To the Best of Our Knowledge. He’s the author of Atoms and Eden: Conversations on Religion and Science.
