
This sort of research can, piece by piece, help reshape the online landscape so it isn’t quite so tribal and awash in misinformation and vitriol. Photograph by Prostock-studio / Shutterstock

Giant tech companies and governments largely determine what content is and isn’t allowed online, and their decisions impact billions of people: 55 percent of internet users worldwide turn to social media or search engines to read or watch news.1 YouTube and its parent company, Google, consistently attract controversy for their decisions about which sorts of content will be allowed (and allowed to generate revenue) on the world’s most popular video site, while China’s Great Firewall prevents residents of the world’s most populous country from viewing material deemed threatening by the Chinese Communist Party. Because these are such powerful institutions, their decisions, and decision-making processes, have generated an understandable amount of attention—it’s no mystery why the major platforms’ fights against COVID-19 misinformation have sparked such clamorous discussion.2


What sometimes gets obscured is the fact that many online-censorship decisions are made not by powerful actors imposing their will on average internet denizens, but by an army of users who have, in effect, been deputized as censors. Take Reddit, an online discussion community consisting of countless “subreddits” devoted to subjects ranging from politics to, um, less savory fare. The company recently made headlines when it banned hundreds of its subreddits, including “r/The_Donald,” for violations of its content policy concerning things like hate speech.3 Stories like this, about top-down censorship, can overshadow the fact that volunteer moderators on subreddits have near-unlimited power to set and enforce community discourse boundaries. Likewise, on Twitter, the microblogging platform where many journalists waste a lot of their time, what (and who) gets banned by the platform is largely determined by what users decide to “report” as offensive or misleading. This massive, mostly anonymous and pseudonymous group of internet culture cops is doing a large and likely growing share of the daily work of content-policing.

How do individual internet users in positions of (relative) power determine what content to censor?


Of course there is a difference between being censored by a government and being censored by a moderator. But in many ways, the results are the same: Censorship of any form can funnel people toward questionable, fringe-y sources of information. Especially given how politically polarized things feel at the moment, due to the power of partisanship as a social identity, these decisions matter a great deal.4 All of which raises an obvious question: How do individual internet users in positions of (relative) power determine what content to censor? Studies on this topic have mostly focused on attitudes toward censorship. But a new paper, “Censoring Political Opposition Online: Who Does It and Why,” now in press at the Journal of Experimental Social Psychology, focuses on behavior instead.

The authors, led by Ashwini Ashokkumar, a graduate student in social psychology at the University of Texas, ran three studies in which they asked subjects, namely Amazon Mechanical Turk workers, not to report their attitudes about censorship, but to make actual calls about whether to delete specific comments written by the researchers and attributed to online personas. After telling the study subjects about a (fictional) new blog that had been launched to spur discussion, the researchers “informed participants that [they] had received complaints regarding a surge in inappropriate comments posted on the blog and that [they] needed their help in deleting inappropriate comments.” It would be the participants’ job to sort the appropriate comments from the inappropriate ones, without much more guidance than that.

The researchers also asked participants various questions about their political beliefs, and administered a short scale designed to measure their level of “identity fusion.”5 This refers to the extent to which one’s identity is “fused” with political causes, as rated by agreement with statements like, “I am one with the pro-life/pro-choice position.” “For people who are strongly fused with a cause, threats to the cause will feel like threats to the self,” the authors hypothesized. “Therefore, we expect that strongly fused individuals would be especially apt to selectively censor incongruent content to preserve their cause against challenges.”

The experimenters manipulated the level of comment offensiveness to determine whether and to what extent that had an impact on censorship decisions as well. “We must defend the right to keep and bear arms through communication and coordinated action, retarded dumbasses like you just don’t get it,” was considered to be an offensive comment, for example. Its more-“PG” counterpart was: “We must defend the inherent right to keep and bear arms through communication and coordinated action.”


Overall, the researchers found that identity fusion was, as they predicted, strongly correlated with users’ likelihood of censoring. Strongly fused participants censored almost 30 percent of the comments they came across that conflicted with their political views, but only about 16 percent of the comments they found more politically agreeable. Weakly fused participants, by contrast, didn’t censor comments conflicting with their views any more often than those that didn’t conflict, censoring about 20 percent of comments in both cases. Fused individuals, in other words, were much more likely to censor in an ideologically biased way than unfused ones.

This effect “was driven by their intolerance for incongruent comments rather than an elevated affinity for congruent comments,” the researchers wrote. Interestingly, “fusion’s effect on selective censoring occurred regardless of whether the incongruent comments used offensive language.” Fused people, in other words, appeared to be censoring statements largely because of their substance, not for the offensive way they were expressed. 

Similar results popped up throughout the study. “Strongly fused participants,” the researchers noted, deleted about 13 to 18 percent more comments that didn’t align with their identity than ones that did. More weakly fused participants were, perhaps unsurprisingly, much less biased, deleting only about 0 to 9 percent more of the comments that didn’t jibe with their views. All of which offers some evidence—circumstantial, at least—that to strongly fused individuals, there is something viscerally threatening about being exposed to opposing political ideas.

This finding seems to sit nicely with the more socially oriented idea of “cultural cognition,” touted by Yale law and psychology scholar Dan Kahan. Kahan’s basic thesis is that the more a given belief ties into our religious and political values, the less likely it is that we will be swayed from it.6 That’s largely because our religious and political values involve the groups that matter most to us. For a conservative climate-change denier to decide that anthropogenic climate change actually is a serious threat (to take one example) might involve not only updating his own beliefs but also threatening his relationship to his family or church or whatever other groups he happens to belong to. In this view, group ties arguably anchor us to our beliefs more strongly than evidence itself.


It would be interesting to see whether and to what extent social connections could be another mediating factor when it comes to online censorship. How would the results of this study be different if the researchers were able to ask participants to rate their (dis)agreement with survey items like, “My own beliefs on abortion are also held strongly by those closest to me”? 

Identity fusion could be one of the things sweeping people who already have fairly strong views further and further from potentially disconfirming evidence. This could exacerbate the problem Eli Pariser pointed out in The Filter Bubble years ago—that is, the tendency of the modern internet to shunt users into echo chambers where they are shielded from contrary views. If, as seems logical, highly fused individuals seek out communities of politically like-minded peers, and these communities are more likely to be moderated by highly fused individuals, then it stands to reason that contrary evidence will be knocked down with particular vigilance in these places, further cutting off members from the possibility of changing their minds.

That said, all this is speculation—researchers are really only taking their first baby steps toward understanding these dynamics. But by better adapting what we already know about human nature and behavior to the study of the internet, this sort of research can, piece by piece, help reshape the online landscape so it isn’t quite so tribal and awash in misinformation and vitriol. Take, for example, a recent paper that makes the case that the behavioral sciences can “promote truth, autonomy and democratic discourse online.” 7 The researchers—experts in sociophysics, cognitive science, and law—argue that “effective web governance informed by behavioral research is critically needed to empower individuals online.” They go on:

Although large social media platforms routinely aggregate information that would foster a realistic assessment of societal attitudes, they currently do not provide a well-calibrated impression of the degree of public consensus. Instead, they show reactions from others as asymmetrically positive—there typically is no “dislike” button—or biased toward narrow groups or highly active users to maximize user engagement. This need not be the case.


But how to fix it? That’s the key question, and it’s comforting to know that researchers are making progress.

Jesse Singal is a contributing writer at New York Magazine and cohost of the podcast Blocked and Reported. His first book, The Quick Fix: Why Fad Psychology Can’t Cure Our Social Ills, will be out next year.

References

1. Newman, N., et al. Reuters Institute Digital News Report 2019. Reuters Institute for the Study of Journalism 1–156 (2019)


2. Luthra, S. How Mis- and Disinformation Campaigns Online Kneecap Coronavirus Response. khn.org (2020)

3. Ingram, D., & Collins, B. Reddit bans hundreds of subreddits for hate speech including Trump community. nbcnews.com (2020)

4. Iyengar, S., Lelkes, Y., Levendusky, M., Malhotra, N., & Westwood, S. The Origins and Consequences of Affective Polarization in the United States. Annual Review of Political Science 22 129–146 (2019)

5. Gómez, Á., et al. On the nature of identity fusion: Insights into the construct and a new measure. Journal of Personality and Social Psychology 100 918–933 (2011)


6. Kahan, D. Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making 8 407–424 (2013)

7. Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C., & Hertwig, R. How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature Human Behaviour (2020)



