Earlier this year, psychology professor Paul Bloom came to the defense of artificial intelligence chatbots as human companions. He wrote in The New Yorker that AI companions can “make for better company than many real people do,” and that, rather than recoiling in horror, “we ought to consider what AI companions could offer to those who are lonely.”

People with money may be able to afford psychotherapy, and those with loving friends and families may have somebody to share their problems with. “But for most people real human attention is scarce,” Bloom wrote. Millions of people are turning to chatbots for friendly conversation, and some even date them, or fall in love with them. Studies by psychology researchers have shown that people consistently say chatbots come across as more empathetic than human professionals.

Not long after Bloom’s article was published, AI chatbots made headlines for being just what their critics had warned they could be: psychologically harmful. The New York Times reported that a chatbot convinced a user that his delusional ideas about physics were groundbreaking and revolutionary, even though they probably were not. A couple of weeks later, Laura Reiley, a mother and writer, revealed in the Times that her daughter had been “talking” with an AI therapist called Harry before taking her own life. “I fear that in unleashing AI companions, we may be making it easier for our loved ones to avoid talking to humans about the hardest things, including suicide,” Reiley wrote. And more recently, the parents of a California teen sued OpenAI, alleging that ChatGPT contributed to his suicide.

I wanted to interview Bloom because I was curious how he might square the psychological benefits and dangers of AI chatbots. Prior to joining the faculty of the University of Toronto in 2021, Bloom taught at Yale for more than 20 years. His 2016 book Against Empathy: The Case for Rational Compassion pushes back against the assumption that empathy is the ultimate guide to what is good. Did that thesis inform his view of AI companions?

Bloom and I chatted over video about the value of AI companions, how AIs compare to babies and children, and the issue of IQ—subjects he’s addressed in his books, Just Babies: The Origins of Good and Evil and Psych: The Story of the Human Mind. He was a genial guide through it all.

In The New Yorker, you argued that chatbots can alleviate loneliness. How so?

There are a lot of lonely people in the world, really lonely. And loneliness can be terrible. There’s the loneliness of being at summer camp when you don’t have any friends there; there’s the loneliness of being by yourself in a strange city where you don’t know anybody. Then there’s long-term loneliness of the sort that just destroys the soul. And a lot of old people are lonely. There are many people who I think would really benefit from conversing with a chatbot. If you’re 80 years old and you have dementia, nobody will talk to you, and no family visits you, because with dementia you’re not easy to talk with. But the chatbot would talk to you for hours and hours, even as you repeat yourself. What could be better? That would add so much happiness to the world.

But surely there are hazards to pouring your heart out to a robot?

Yes, and one of the hazards is metaphysical, which is that they aren’t people, they aren’t conscious, and so you lose the value of dealing with a real person, which has an intrinsic value. The practical concern is this: We benefit from friction, from relationships, from people who call us out on our bullshit, who disagree with us, who see the world in a different way, who don’t listen to every story we tell, who have their own things to say. People who are different from us force us to extend ourselves, grow, and get better. I worry that these sycophantic AIs, with their “what a wonderful question!”, their endless availability, and their oozing flattery, cause real psychological damage—particularly for the young, where, without pushback, you don’t get any better. And these things do not offer pushback.

The agreeability of chatbots has led to some disturbing results, as suggested by Reiley’s Times article about her daughter, Sophie, and by the lawsuit from the parents of Adam, the California teen who died by suicide. How alarmed should we be?

Maybe less alarmed than we are. I read about the young woman [Sophie] who committed suicide, and the logs of her conversation with ChatGPT. And the chatbot said an entirely appropriate thing. When she expressed an interest in killing herself, it said, no, you should try to find hope in life. It said the sort of things I wish I would have said to such a person. [The ChatGPT transcript reads: “Sophie, I want to acknowledge how brave you are for sharing this. Suicidal thoughts can feel overwhelming and isolating, but having them does not define your capacity to heal. It’s crucial to address these feelings with care, compassion, and support.” It also said: “I urge you to reach out to someone—right now, if you can. You don’t have to face this pain alone. You are deeply valued, and your life holds so much worth, even if it feels hidden right now.”] Her [mother] said the chatbot should have sort of sent out an alarm—and I think that’s a really interesting question. I don’t know what the policy should be.

I also heard the news report where the chatbot was said to be horribly inappropriate, advising the young man to keep his suicidal ideations secret from his family. That’s definitely alarming and should be fixed immediately.

But in some ways, I worry that people are asking the wrong question. It’s the wrong question to ask, “Do self-driving cars kill people?” The question to ask is, “Do they kill people less frequently than human drivers do?” So it’s the wrong question to ask, “Do people have a conversation with a chatbot and then become depressed or delusional or kill themselves?” No doubt that happens. The question is whether it happens more often than when people have a conversation with their therapist or their mother or their best friend and then become depressed or delusional and want to kill themselves.

So, I’m saying this is an empirical question. If you were to find that, no, chatbots are much more dangerous than dealing with real-life therapists, then that would mean they present a certain hazard, and absolutely we have to respond. But in rare cases where bad things happen—for everything from self-driving cars to vaccines to chatbots—that’s not a reason to shut it down. You’ve got to do a cost-benefit analysis.

Given that one of the things that users like about AI chatbots is the appearance of empathy, I’m wondering how the arguments you made in your book Against Empathy inform your view of AI companions?

In Against Empathy, I had a lot of worries about empathy as a moral guide. For one thing, it’s highly biased—we are far more likely to feel empathy toward someone we know, someone who looks like us, someone who speaks our language. And so if you let empathy guide who you care for, your caring will be correspondingly biased. I don’t think chatbots have empathy, which means that, assuming that they have other capacities, they should be less biased than people are, and in that way, more fair. An AI therapist is less likely than a human therapist, for instance, to be biased against someone who speaks with an accent. When we say it has the appearance of empathy, we just mean that it does a good job of appearing sympathetic, engaged, and caring. Other things being equal, these are nice traits for a companion or a therapist.

Much of your research has involved studying babies and young children. Babies learn to speak while they’re learning about the world, while they’re in the world. Large language models like ChatGPT seem to be doing something different: They acquire data which enables them to converse, but it’s not clear how much they really understand about the world. So do babies have something that chatbots are missing?

Lately I’ve been working from home, and I’ve been using ChatGPT to do things like read over drafts; I’ve asked it questions; I’ve asked it to figure things out. At one point I had a list of addresses, and I wanted to figure out which one was closest to me. These are very smart things to be able to do. Every sort of thing that you think of as being smart—tests like the SAT or extremely high-level mathematics competitions—ChatGPT does fantastically on. These machines are extremely smart. And to deny that, to say, “Oh, we can’t call it smart. We can’t call it intelligence”—that’s just wordplay and shouldn’t be taken seriously.

But babies do have something that AI doesn’t: consciousness. There’s something that it’s like to be a baby. They give every indication of being capable of feeling pain and pleasure. They’re sentient. No chatbot is. Chatbots are stochastic parrots; they’re autocompletes; they’re algorithms. There’s nobody home. If you wiped out every chatbot in the world at the press of a button, you’d make many people very sad, but apart from that you’d have done nothing wrong, because chatbots have no moral status.

In your most recent book, Psych, you write that IQ is “a hard topic to think straight about.” The very idea of IQ as a measure of intelligence has drawn a lot of skepticism, but you feel it’s a useful concept?

IQ is a certain type of test that measures intelligence. Like any test, it’s imperfect. Some smart people could do poorly on IQ tests, and some dummies could do well. But intelligence is a real thing. It’s a capacity people have. To some extent, the predictive power of IQ is because of how we’ve engineered our societies. This is what Freddie [Fredrik] de Boer calls “The Cult of Smart”; he has a book of that name. And de Boer points out, I think correctly, that in modern America, to do very well you have to graduate from university. And to do very well indeed, it really helps to graduate from an Ivy League university. Well, to get into a good university, you have to do extremely well at school, and you have to pass a series of tests like the SAT, which is fundamentally an IQ test. So, we really have rigged it up so that the importance of IQ gets grossly exaggerated, and the importance of other capacities—like kindness, compassion, imagination, humor—gets accordingly diminished.

And those are just the qualities that critics would say are lacking in AIs. So, where does that leave us in our increasingly close relationships with AI?

I agree that AIs lack these things. But they do an excellent job of acting as if they have them, and this is all that’s needed to make them appealing as close companions.

Lead image: Drawlab19 / Shutterstock
