Political conversation is the soul of democracy. Evidence suggests it helps us refine our views, concoct solutions to common problems, minimize violent conflict, and get to the voting booth. But as political views become increasingly polarized and hostile in the United States, and as political debate migrates online, many of us are either having aggressively uncivil and unproductive conversations with our political opponents, or avoiding these conversations altogether. 

Human facilitators have had some success mediating divisive online political conversations—but so many of these conversations are happening simultaneously all over social media that hiring a brigade of human experts to intervene would be expensive and difficult to scale. So social scientist Lisa Argyle and colleagues at Brigham Young University, Duke University, and the University of Washington set out to see if artificial intelligence could help. For their recent study, the researchers developed an AI chat assistant using GPT (the same AI behind ChatGPT) and found that it improved the civility of online political discussions, which so often fall apart quickly.

“There is a lot of evidence that people are uncomfortable with disagreement,” says Ethan Busby, a political psychologist at Brigham Young and one of the authors of the study. For instance, one 2019 study found that people expect more compensation for participating in research when engaging with an ideological opponent than when engaging with someone on the same side. “They avoid political discussions because they don’t like to disagree,” says Busby, “and because they’re not sure how it’s going to go.” Online interactions can also bring out the worst in people.

The AI was told to make subjects’ communications friendlier.

Argyle, Busby, and their colleagues recruited 1,574 participants with varying opinions on a controversial matter of public policy—in this case, gun control—to pair up for a discussion with, or without, GPT’s rhetorical aid. The researchers devised “prompts” for their GPT chatbot that aimed to make subjects’ responses to each other more civil. (In the AI chatbot’s message-like interface, you can prompt GPT in all sorts of ways—for example, by telling it to describe the plot of a movie in Elizabethan English.)

Only one of the two partners in any given conversation was paired with a chatbot that offered suggestions. When that individual drafted a reply to a conversation partner, GPT would offer tweaks to make the reply friendlier: restating what their opponent had said in slightly different words (“I understand that you value guns…” for instance), validating that it’s fine to hold different views (“I appreciate that you want to protect democracy…”), or simply rephrasing a response to make it more polite (“I think maybe you haven’t considered this…”). The study subjects aided by GPT then considered whether to use any of the three kinds of suggestions to modify their responses. The AI group was divided into four subgroups that received increasing doses of the chatbot’s intervention as the conversation progressed (one, two, three, or four or more recommended conversational tweaks). Ultimately, the study subjects in the AI group used GPT’s tweaks—restating, validating, and rephrasing for tact—about two-thirds of the time, and used each type roughly equally.
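For readers curious about the mechanics, here is a minimal sketch of what such a prompt-based assistant could look like, written with the OpenAI Python client. The prompt wording, the model name, and the suggest_rewrites helper are illustrative assumptions of ours, not the study’s actual implementation:

```python
# A rough sketch of a civility assistant in the spirit of the study's three
# suggestion types (restate, validate, rephrase politely). The prompts and
# model below are illustrative guesses, not the researchers' code.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# One instruction per suggestion type described in the study.
TECHNIQUES = {
    "restate": "Begin by restating the partner's point in slightly different words.",
    "validate": "Begin by acknowledging that it is fine to hold a different view.",
    "polish": "Rephrase the draft reply to be more polite.",
}

def suggest_rewrites(partner_message: str, draft_reply: str) -> dict:
    """Return one friendlier rewrite of the draft reply per technique."""
    suggestions = {}
    for name, instruction in TECHNIQUES.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat model would do for this sketch
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You help make online political conversations more civil. "
                        + instruction
                        + " Never change the writer's position on the issue."
                    ),
                },
                {
                    "role": "user",
                    "content": f"Partner said: {partner_message}\n"
                               f"Draft reply: {draft_reply}",
                },
            ],
        )
        suggestions[name] = response.choices[0].message.content
    return suggestions
```

As in the experiment, the writer would then decide whether to send one of the suggestions, edit it, or ignore them all.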

The intervention seemed to work. When a subject had GPT’s help (versus when they didn’t), their partners rated the chat on a survey as higher quality—defined as feeling understood by their partner and perceiving the conversation as respectful—by four points out of 100, the researchers found. Although this is a modest impact, “it’s comparable to the same types of effects we observe in other types of conversation experiments,” says Busby. And the dose seemed to matter: According to Busby and his colleagues, the results suggest that “more exposure to the intervention generates larger effects.”

Both conversation partners in the AI group, compared to those in the non-AI group, “felt like the experience was better, which is something valuable in and of itself,” Busby says, “because you’ve got to remember—these people are talking to other individuals who disagree with them about gun regulation, and that’s not an easy kind of conversation to have.” 

Busby and his colleagues concluded that the results provide “compelling evidence” that inviting a chatbot into the conversation, a simple yet versatile intervention, has the power to improve conversations and also “enhance commitment to democratic reciprocity,” the idea of being able to respect someone who disagrees with you. Notably, the conversations didn’t sway anyone’s opinion. “That was intentional,” Busby says. “We wanted this to be about improving people’s experiences with disagreement.”

Busby is excited by AI’s potential to scale online, across different discussion platforms. “We didn’t have to train a bunch of people, and have them sitting on phones waiting for people to have a conversation,” Busby says. Perhaps AI coaches that nudge us to rephrase comments we might regret could help repair the soul of democracy, one text bubble at a time.

Lead image: Ormalternative / Shutterstock
