In the late 1970s, groups of soda marketers descended on the nation’s malls. They gave shoppers two unmarked cups, one filled with Coke and one with Pepsi. Tasters were asked which they preferred. The Pepsi Challenge was a marketing gimmick, but it was based on a classic scientific tool, the blind experiment. If a person doesn’t know which experimental treatment is which, her preconceptions are less likely to affect how she interprets information. Blind experiments have been used to avoid unconscious bias for more than 200 years and are among the scientific method’s most important tools.

Yet a growing number of researchers say that scientists in many fields often fail to use blind observation, even when it would be easy to do so. Most recently, Melissa Kardish and fellow researchers at the University of Texas at Austin surveyed 492 studies in the field of ecology, evolution, and behavior. Their analysis, recently published in Frontiers in Ecology and Evolution, drew from 13 journals, ranging from heavy-hitters like Science and Nature to lower-profile publications like American Naturalist. Of the experiments that could have been influenced by confirmation bias, only a little over 13 percent reported the use of blinding. “If you ask any scientist in any field whether blinding is important, they’ll say it is,” Kardish says. “But I definitely think it’s not a current standard.”

Meat testers back in 1935 ran an early version of the Pepsi Challenge. Everett Historical Collection via Shutterstock

We tend to think of blind trials in the context of medicine. A patient in a clinical trial often doesn’t know whether she is receiving a placebo or an experimental treatment. This way researchers can determine which effects are due to the chemistry of the medicine and which due to a patient’s belief that a given treatment is going to work, the so-called placebo effect. In a double-blind trial, the researcher is also kept in the dark about who receives placebos and who receives experimental treatments.

Blinding research subjects isn’t necessary in fields like ecology, where the organisms under study aren’t likely to have built-in biases. But blinding the researcher remains important, especially when she expects a particular outcome and the variables being measured are subjective. Yet it is dramatically underutilized in many branches of science. Is that ant spreading its mandibles in a show of aggression or is it just asking for food? (Social insects often feed one another, a process called trophallaxis.) If an entomologist expects the ant to be friendly—say, when interacting with another ant from its colony—she might lean toward the latter. If, on the other hand, she doesn’t know the relationship between the ants, she can assess the gesture without bias.

In many cases, blinding isn’t difficult. Researchers can, for instance, identify samples using arbitrary codes, rather than overtly labeling them as one treatment or another. In the example above, one investigator might track the identity of the ants while another scores the behavior. The latter job is that of the “naive experimenter,” since she is unaware of treatment conditions. But blinding isn’t always an option. If you need someone with specialized training to gather data, for example, a naive experimenter might be impractical. Blinding in remote field locations can be logistically impossible, and in some cases, the variable being tested is hard to disguise. (If the naive experimenter sees, for instance, that an experiment is meant to test whether a pollinator visits red or blue flowers more frequently, she might unconsciously form her own hypothesis, thus introducing potential bias.) Still, even if you exclude such exceptions from Kardish’s analysis, more than three quarters of the remaining studies in which bias could have been a problem don’t mention blinding.
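The coding scheme described above is simple enough to sketch in a few lines. The snippet below is a hypothetical illustration, not drawn from any of the studies discussed: arbitrary sample IDs are shuffled so that the scorer never sees treatment labels, while a second investigator holds the decoding key until scoring is complete.

```python
import random

def blind_samples(labels, seed=None):
    """Assign arbitrary codes to treatment labels so the person scoring
    behavior can't tell which sample belongs to which treatment.
    Returns the coded IDs (for the scorer) and the decoding key."""
    rng = random.Random(seed)
    codes = [f"S{i:03d}" for i in range(1, len(labels) + 1)]
    rng.shuffle(codes)
    # The key stays with the non-scoring investigator until scoring is done.
    key = dict(zip(codes, labels))
    return sorted(key), key

# Hypothetical ant-behavior example: two nestmate trials, two non-nestmate trials.
samples = ["nestmate", "nestmate", "non-nestmate", "non-nestmate"]
coded, key = blind_samples(samples, seed=42)
print(coded)  # the scorer sees only arbitrary IDs such as S001..S004
```

Only after all behaviors are scored against the coded IDs does the second investigator reveal the key, so expectations about nestmate friendliness can’t color the observations.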

Could it be that some researchers are blinding their studies and failing to report it? Kardish thinks that could be happening in some cases. She cites the word-count limitations journals impose as one reason authors may not report blinding. 

Some other researchers say that’s an unlikely explanation for why so many studies fail to mention blinding. Luke Holman, an evolutionary biologist at Australian National University who has also studied blinding, says that “researchers have an incentive to declare that they worked blind, as their study is then more likely to be favorably received by journal editors and readers.” A forthcoming paper of Holman’s in PLOS Biology documents how infrequent blinding is across the life sciences. “Medicine is best,” he notes, “but still rather bad.”

The lack of blinding has led to some troubling patterns. A 2013 analysis of studies on nestmate recognition in ants found that blinded studies reported a much higher level of aggression among nestmates than those that didn’t report blinding. (Ant family life is apparently not as congenial as one might expect.) And effect sizes—how much an experimental treatment affected the variable being measured—were much lower in blind experiments. In other words, when researchers expected a certain outcome and failed to safeguard against confirmation bias, their expectations seem to have colored the results. The nestmate analysis concluded that the problems introduced by bias weren’t enough to invalidate most studies, but Holman says that’s not always the case. “Observer bias can have a massive effect, often bigger than the ‘real’ difference being studied,” he says. “There is very strong evidence that non-blind studies get the results the authors expected to find more often than blind studies do.”

We’ve known for a long time that a lack of blinding can skew an experiment’s results. Researchers have been studying the effects of doing without it since at least the 1960s. And as Philip Ball reported in a recent piece in Nautilus, science is full of “distorting influences,” of which observer bias is just one. But as Ball notes, a movement towards transparency and accountability seems to be afoot. An increased focus on blinding is one part of the drive to clean house within science. “It seems to be becoming a hot topic,” says Michael Ritchie, editor of the Journal of Evolutionary Biology, one of those included in Kardish’s study.

The onus to fix the problem, many researchers say, lies with the journals. “When you’re publishing something in a peer-reviewed journal,” says Kardish, “the standard should be that you have to blind your study or state why you can’t blind your study.”

But journal editors say the problem isn’t so easy to fix from their end. Michelle Scott, executive editor of Animal Behavior, another journal in Kardish’s study, says that while studies that lack blinding have “a serious potential for bias,” she doesn’t want to lose good submissions by requiring it across the board. “The issue is how to make more research adhere to best methods,” she says. The next Animal Behavior newsletter will include an article on Kardish’s study to help “publicize the problem,” says Scott.

Ritchie says the topic of blinding is on the agenda for the next Journal of Evolutionary Biology editorial board meeting in August. Though he has reservations about making a hard-and-fast rule, he says, “I expect we will amend the instructions to authors.”

One journal possibly down, many thousands left to go. In the meantime, how is one to judge the results of non-blinded studies? With caution, says Holman. “Personally, I think a great many individual research results are suspect,” he says. Still, he adds, “all in all we usually converge towards the truth.”

Andrea Appleton is a freelance journalist based in Baltimore. Her work has appeared in Aeon, Al Jazeera America, and High Country News, among other places.
