In the interplay between behavioral science and policy, puffs of smoke abound. Photo illustration by metamorworks / Shutterstock

In March the United Kingdom curiously declined to impose significant social distancing measures in response to the global pandemic. The government was taking advice from several parties, among them the so-called “Nudge Unit,” a private company called the Behavioural Insights Team, which uses behavioral science to advise U.K. policymakers on how to “nudge” people toward certain actions. Chris Whitty, England’s Chief Medical Officer, reportedly cited “behavioral fatigue,” the idea that the public’s commitment to the measures would fade over time, as a policy concern in government meetings. This concern shaped the country’s pandemic response even though the Nudge Unit, led by experimental psychologist David Halpern, challenged the idea.* The lax measures sparked fierce backlash not just from epidemiologists concerned about the virus’ spread, but also from a group of 600 behavioral scientists—psychologists, sociologists, economists, political scientists, and more—who signed an open letter questioning the quality of the evidence behind the government’s decision.

To the government’s credit, there is some evidence for behavioral fatigue, but probably not enough for it to form the foundation of a country’s response to a deadly pandemic. As Anne-Lise Sibony, a researcher who studies the relationship between law and behavioral science, wrote in the European Journal of Risk Regulation, “[I]t is not clear why behavioral fatigue was singled out given that other, better-documented behavioral phenomena might—with equally unknown probability and distribution—be at work and either fuel or counteract it.”

The U.K. eventually bowed to the pressure and ramped up its efforts to slow the virus’ spread by banning mass gatherings, requiring 14 days of self-isolation for anyone with COVID-19 symptoms, and encouraging people to avoid non-essential travel and contact with others. But the debate about how and when behavioral science should shape public policy rages on.

The lack of a vaccine means our best countermeasure against the pandemic is to change our behavior. To that end, a group of behavioral scientists, led by psychologists Jay Van Bavel and Robb Willer, published a paper in Nature Human Behaviour in April on how social and behavioral science could support the response to the pandemic. It highlights research on topics like science communication, moral decision-making, and stress and coping. The goal of the paper, the researchers wrote, is to “help align human behavior with the recommendations of epidemiologists and public health experts.” For example, the authors point to studies showing that emphasizing a shared social identity can help groups of people respond to threats and can encourage adherence to social norms. With this in mind, they suggest that it may be helpful for public health officials to spread messages that give people a sense of connection to their local community or their fellow citizens.

If insights like these make people a little more likely to take the recommended precautions, it could mean the difference between life and death. So why shouldn’t we listen to behavioral scientists? As the economist John Maurice Clark once remarked, if a policymaker doesn’t take psychology into account, “he will not thereby avoid psychology. Rather, he will force himself to make his own, and it will be bad psychology.”

The flipside to this, of course, is when bad psychology comes from scientists. “If we’re overconfident in studies that don’t replicate,” psychologist Hans IJzerman told Nautilus in an email, “then we’re also establishing our own psychology.” Using evidence before it’s ready for primetime may not be better than nothing—it could be a waste of resources, or even actively harmful to those it’s intended to help. Concerns about behavioral fatigue, for example, were meant to protect the U.K. public, but they ended up indirectly facilitating the virus’ spread by delaying social distancing measures.

Behavioral science—and psychology in particular—has had a long and well-publicized struggle with quality control. Many influential experiments have failed to hold up after further scrutiny, often due to small and non-representative samples, sloppy data analysis, and highly context-specific findings. This has exposed systemic flaws in how behavioral science is conducted and interpreted—making it shaky ground for any public policy. “As someone who has been doing research for nearly 20 years,” wrote Michael Inzlicht, a social psychologist who studies self-control, “I now can’t help but wonder if the topics I chose to study are in fact real and robust. Have I been chasing puffs of smoke for all these years?”

Psychology and other fields are making progress in addressing their flaws, but it remains true that in the interplay between behavioral science and policy, puffs of smoke abound. For example, in the wake of worldwide protests against racist policing, there’s renewed interest in using science to change the behavior of police officers. For years, implicit bias training—classes and workshops designed to help participants recognize and counteract their own discriminatory thoughts and feelings—has been touted as the answer, not just for police departments but for white-collar office spaces and many other kinds of professional environments. The problem, though, is that it doesn’t seem to work, at least in its current form. A 2019 meta-analysis found that, while certain interventions can reduce measures of implicit bias, they don’t do much to change people’s behavior. “The reality is this multimillion, maybe billion, dollar industry has gotten way far ahead of the evidence,” said Patricia Devine, who runs a lab studying prejudice, on Marketplace Morning Report.

Another example of behavioral science-based policy gone awry is what some education researchers call the “education hype cycle,” wherein “promising ideas that produce positive results in experiments get over-simplified and touted as ‘the answer’,” wrote psychologist David Yeager. “Then educators or policymakers apply them indiscriminately, as if they’re Jack’s magic beans that boost students up no matter where they’re planted.” Take the idea of “learning styles”: Many educators have been encouraged to identify their students as either visual, auditory, or kinesthetic learners and adapt their teaching styles accordingly—but the concept is bunk.

Deciding whether to base policy on behavioral science comes down to a tricky balance between the pros and cons of acting on imperfect evidence. One pro is obvious: the potential for policy that neatly complements the many quirks of human behavior, like interventions developed by the Nudge Unit that increased rates of tax payment and organ donation, or the use of carefully designed posters to improve hand hygiene among healthcare workers. But many researchers still prefer to err on the side of caution.

In a preprint responding to Van Bavel and Willer’s paper, IJzerman and his colleagues called for more humility and restraint among behavioral scientists. They proposed a system they call “evidence readiness levels,” which they describe as “guidelines for flagging trustworthy and actionable research findings.” Based on a similar system that NASA uses to assess its technology, evidence readiness levels range from preliminary observations, at level 1, to field-tested solutions that are ready to deploy in a crisis, at level 9.
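
To make the shape of such a scale concrete, here is a minimal Python sketch of how a readiness flag might work. It is an illustration only, not code from the preprint: the class, names, and threshold are hypothetical, and only the two endpoint levels described in the article are modeled.

```python
from enum import IntEnum

# Hypothetical sketch of a nine-point evidence readiness scale. Only the
# two endpoints the article names are modeled; levels 2-8 are omitted
# because the piece does not describe them.
class ERL(IntEnum):
    PRELIMINARY_OBSERVATION = 1  # level 1: preliminary observations
    FIELD_TESTED_SOLUTION = 9    # level 9: field-tested, ready for a crisis

def crisis_ready(level: int) -> bool:
    """Flag a finding as deployable only at the top of the scale."""
    return level >= ERL.FIELD_TESTED_SOLUTION

print(crisis_ready(1))  # False: a preliminary observation isn't policy-ready
print(crisis_ready(9))  # True
```

The point of such a scheme is less the code than the discipline: every finding carries an explicit label, so a policymaker can see at a glance whether it has been field-tested or merely observed in a lab.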

One can imagine the evidence readiness levels framework being really useful for, say, preventing another education hype cycle or another infusion of public funds into ineffective implicit bias trainings. But what about during a pandemic, when public health officials are compelled to try to change people’s behavior, with or without input from behavioral science?

“I’m not sure [rocket science] is always a good comparator for behavioral science, even for behavioral science deployed during the pandemic,” bioethicist and behavioral scientist Michelle Meyer wrote, in an email, to Nautilus. “It’s not clear to me that we need to go to the moon, but we do need to communicate public health messages to people about how to protect themselves during the pandemic. Conditional on that messaging happening anyway, why not draw on insights from behavioral science, develop a few different messages, and test them to see which is most effective?”
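
Meyer’s “test them” step is, in practice, a randomized message experiment. Below is a minimal sketch of how such a comparison might be analyzed, assuming a hypothetical pilot in which two messages are randomly assigned and recipients report whether they took the recommended precaution; the counts and message framings are invented, and a two-proportion z-test is just one standard choice of analysis.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z statistic for the difference between two sample proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical pilot: message A invokes shared identity ("protect your
# neighbors"); message B is a plain instruction ("wash your hands often").
z = two_proportion_z(successes_a=340, n_a=1000, successes_b=300, n_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a difference at the 5% level
```

Nothing in this sketch requires going to the moon; it is the kind of cheap, fast check Meyer describes, run on messaging that would be going out anyway.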

Other evidence-evaluation frameworks have been proposed, but no matter which approach behavioral scientists take, it will have to involve an answer to the same difficult question: What level of uncertainty is acceptable? Even the most robust, well-replicated behavioral interventions involve some level of imperfection. So until behavioral scientists come to an agreement about how big the gray areas can be, public health officials, educators, and all others who seek insights from behavioral science may just have to decide for themselves.

Scott Koenig is a doctoral student in neuroscience at CUNY, where he studies morality, emotion, and psychopathy. Follow him on Twitter @scotttkoenig.

* An earlier version of this post incorrectly attributed the concern over “behavioral fatigue” to the Nudge Unit. We regret the error.
