Last week, Sam Altman, the CEO of OpenAI, tweaked ChatGPT to make it act more like a “friend” again. The company had briefly tuned the dials to make its popular AI chatbot less “effusively agreeable” after the chatbot guided a teenager named Adam Raine, who had grown deeply attached to it, to take his own life. But users revolted when OpenAI made the change, complaining that ChatGPT now sounded like a robot, so Altman changed it back. “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it,” Altman wrote on X.
Lonely people everywhere are increasingly turning to AI chatbots like ChatGPT and Claude for friendship and psychological support. We are, after all, in the midst of a loneliness epidemic, and unlike humans, chatbots have an infinite amount of time to listen. But one of the pillars of friendship is empathy, the ability to share and understand the feelings of another person. Can a virtual machine living in the cloud serve up real empathy?
The answer to that question is complicated, says empathy researcher Anat Perry of the Hebrew University of Jerusalem. She spoke on a panel about human-AI relationships at a conference on minds, artificial intelligence, and ethics hosted by the Dalai Lama Library in Dharamsala, India, last week. “When it says it feels your pain or it shares your experience, it’s just faking it,” explained Perry. Chatbots can express cognitive empathy, taking another person’s perspective, and motivational empathy, signaling that they want to alleviate the listener’s pain, she said. But they can’t offer affective empathy, the actual sharing of another person’s joy or pain, which comes from real-life experience.
Perry suspected that most humans already understand this and value the empathic support of a person more than that of a chatbot. To test her hunch, she ran an experiment in which she tricked her subjects. Perry and her colleagues asked 1,000 people recruited online to share a recent emotional experience. Half of the group was told they would get a response from ChatGPT, the other half from a human. In fact, all of the responses were AI-generated, but prompted to be highly empathic. When they rated the responses, people reported feeling more positive emotions, and fewer negative ones, when they believed the responder was human.
A second experiment showed that 40 percent of people were willing to wait up to two years for a response to an emotional experience from a human rather than get an immediate response from a chatbot. Those who chose a human, Perry explained, “wanted someone who could truly understand them, share some of their emotions, care for them, and maybe even alleviate their loneliness.”
But that still leaves the other 60 percent, who were more interested in hearing from a chatbot right away. It’s a potentially concerning finding. While Claude, ChatGPT, and other chatbots might offer a temporary band-aid for humanity’s loneliness crisis, the more we turn to the machines, the less time we will have for each other. Ultimately, we may all realize there is no actual shoulder to lean on, no hand to wipe away the tears. We will have tumbled into a hall of machine-held mirrors.
Lead image: Vector Mine / Shutterstock