In the 1990s, the block I lived on in New York City was chaotic and seedy. From my window, I’d witnessed many drug deals, one stabbing, and the aftermath of one shooting. The mayhem escalated dramatically on the Fourth of July, when it was a good idea to stay somewhere else for the night. But one sweltering Fourth, my travel plans fell through.
I’m gazing out my second-floor window when two young men on the street below start throwing cherry-bomb firecrackers into the open window of the apartment directly across the street. After a few minutes of that, one of them leaps up, grabs the fire escape ladder and climbs up and through the open window into the apartment. His friend follows him.
It's more than the usual unruliness, so I call 911 and report what's happening, along with the address of the building across the street. The dispatcher wants my address, so I provide that too.
A few minutes later, a squad car roars up. But the officers are confused—they're trying to get into my building, not the one across the street. I hear the buzzer sound and the exterior building door swing open, then hear two cops charging up the stairs to my landing. I want to tell them they're in the wrong building, so I go to the door and unlatch the lock. I push open the door and see two pistols pointed right at my face from just a few feet away, the two cops braced to fire. They immediately drop their hands.
Two things about that moment have stayed with me over the years.
One is how freaked out they were. My flashbulb memory of that moment is two gun muzzles, two terrified faces.
The other is how fast they reacted. It had nothing to do with how I moved or what I said or what was or wasn’t in my hand; it was instantaneous, like a reflex. “Across the street!” I hissed, and they thundered back down the stairs and across the street. (The firecracker youths were long gone.)
The fact that those officers didn’t blow my head off wasn’t luck. It was implicit bias in action—one of the lightning-fast automatic cognitive processes that shape much of human decision-making. The instant the officers saw me, a white female, they made the split-second decision that I was no threat.
In this case, they were right, but too often it plays the other way: An officer's implicit bias categorizes an innocent black man as dangerous, simply because of who he is, and the man is shot or killed. If I had been a black man and done the same dumb, innocent thing—opened my door in the face of a cop who thought he was in a dangerous situation—it might have been the end of me.
Implicit bias is a product of the fact that people constantly make non-conscious judgments about other people that rely on stereotypes and other one-size-fits-all formulas. These stereotypes are a shortcut, a way to quickly predict what other people are about to do; they’re fast, but often wrong. They have a particularly powerful influence over cognition during times of stress and time pressure, such as when police officers are afraid of being attacked.
Since the 1990s, social psychologists have amassed a wealth of evidence demonstrating the power and pervasiveness of these biases. The best-known laboratory measure, the Implicit Association Test (IAT), is one you can try for yourself. Its many variations are designed to measure the strength of non-conscious associations you likely hold but are unaware of; the most widely used version reveals implicit racial bias against African Americans. When asked to pair pleasant words with white faces, white people respond just slightly faster than when asked to pair pleasant words with black faces. Asian Americans respond similarly to whites, and about half of African Americans also show a moderate bias toward whites. The test is thought to reflect an automatic cognitive preference for whites.
These automatic judgments may even contradict conscious beliefs. A person may be consciously anti-racist but still unconsciously respond to someone of another race in an unfair or prejudicial way. It’s the consequence of living in a culture pervaded with biases, say social psychologists—we absorb and internalize stereotypes without knowing it. “We’re living in a sea of stereotypes that soak into our brains,” says University of Washington psychologist Anthony Greenwald, who along with colleagues created the IAT. “The associations that make up implicit biases are acquired over a lifetime.”
Implicit bias is now a buzzword in law enforcement and corporate human relations. The U.S. Department of Justice points to implicit bias as a major reason why law enforcement tends to disproportionally focus on African Americans and Latinos as suspects. It even came up as a question during the first presidential debate in September. So police departments and major companies like Facebook and Coca-Cola proudly describe the initiatives they’ve launched and the workshops and training sessions they’re adopting in order to “de-bias” their employees. But there’s a problem: There’s no good proof that implicit bias can be permanently unlearned, and little evidence about the best way to unlearn it.
A person with a conscious, explicit prejudice can sometimes be convinced to change opinions. But persuasion doesn’t work very well against non-conscious stereotypes, because people often don’t know they have them. Simply informing someone about implicit bias isn’t enough. And just instructing someone to suppress their responses doesn’t work; they’re too ingrained. “We’ve learned there’s no easy method of unlearning,” says Greenwald. “It’s not that it’s impossible, but nobody has shown how to do it.”
More than 600 studies have explored various ways to reduce implicit bias. This body of research suggests that biases may be malleable. But there’s no consensus about what techniques are most effective and durable, and whether they translate to the real world.
In 2010, social psychologists set up a contest, asking colleagues to submit their best ideas for de-biasing. The team of 22 researchers tested the 17 best methods, such as boosting positive attitudes toward African Americans, encouraging negative impressions of whites, fostering identification with African Americans, instructing participants to think about the world from a different perspective, or strengthening egalitarian feelings.
More than 17,000 people participated via an online portal. Only eight techniques reduced implicit bias, as measured by a modest change in response times on the IAT. The most successful methods linked African Americans with positive stories or images, or whites with negative ones, and they tended to be vivid and highly personal. A follow-up paper in 2016 found that even for the techniques that worked, none of the effects lasted more than a few days. "Both of these findings were incredibly surprising and depressing," says lead author Calvin Lai, a social psychologist and postdoctoral fellow at the department of psychology at Harvard University. "It was a complete shock to me that none of them had any kind of durability."
Given results like these, academic researchers say that not enough is known about bias to design effective interventions. “If there was a clear, easily implemented solution, anyone could do it,” says Gordon Moskowitz, a social psychologist at Lehigh University who studies non-conscious cognition. “But there isn’t.” There’s even some evidence that well-meaning but badly designed efforts to create awareness can backfire, fostering resentment.
Nonetheless, diversity consultants are already conducting anti-bias trainings throughout the nation's police departments and in major corporations. The DOJ announced in June that its more than 23,000 law-enforcement agents and 5,800 attorneys would be trained to recognize and address implicit bias. Classes usually run one to six hours and generally include education and skills training.
“We’re not helping them unlearn their biases in the time they’re with us,” says Lorie Fridell, an associate professor of criminology at the University of South Florida and CEO of Fair and Impartial Policing, a consultancy working with the DOJ on its initiative. “There’s nothing that’s going to undo the biases that took them a lifetime to develop. We teach tactics and tools that over the long term can reduce biases.” Business is brisk, says Fridell, with the company holding several trainings each week.
Fridell says the FAIP program is developed based on empirical research. She offers the example of the contact hypothesis, which predicts that biases will decline if members of different social groups interact in a positive context. Some research indicates that even imagining such intergroup contact can reduce anxiety and improve attitudes, although in Lai’s experiment, an intervention trying to trigger this effect did not lead to any changes in participants’ IAT results.
Fridell says she’s not surprised by that result: “You don’t expect that to happen in a lab, in a couple hours.” Instead, she says, their classes for recruits and patrol officers involve six hours of training that introduces the idea of implicit bias, increases officers’ motivation to reduce it, and, says Fridell, stops it from being activated. FAIP has not formally evaluated the impact of its training; Fridell says she’s looking for grant support for such a program.
The programs taught by the consultancy Cook Ross are slightly different, aimed at corporations, large nonprofits, and government. The goal, says founding partner Howard Ross, is to provide education about how the mind works and how decision-making happens, and to teach interventions that can help people slow down or mitigate their reactions. “What we’re trying to do is not debias people or make bias go away,” says Ross. “I’m pretty clear, based on the research, it’s difficult if not impossible to do that. What we can do is get people to understand that they can learn to observe their biases and disengage them at times.”
Techniques include conscious breathing exercises that encourage people to use deliberative thought processes rather than automatic or emotional responses and priming tools that can encourage people to practice the skills they learn over time. The program provides people with two-minute refresher videos to be played on cellphones, and suggests clients review lessons about biases regularly, or prior to important meetings or personnel decisions.
“Do I think that has a 100 percent impact? No, of course not,” he says. “But it can be a big impact.” The consultancy does not collect data to evaluate its projects, but some of its clients do, says Ross. According to him, two clients, both unnamed, have successfully increased the proportion of women who are hired or promoted.
Given the uncertainties and the possibility of backlash, social psychologists are generally skeptical about for-profit trainings. “This is a solvable problem, but it’s not solvable by the approaches that naïve people who don’t really understand the situation psychologically are trying,” says Greenwald.
In Moskowitz’s view, de-biasing programs should only be created by experts in psychology; several academics have created such programs, including researchers affiliated with Project Implicit, but these academic-run programs can’t begin to address the demand. Furthermore, any project to teach unbiasing techniques should collect data, Moskowitz says. “If you’re not doing that, you’re just taking a swing, and you have no sense if it’s working or not,” he says. “The stage we’re at is translating nuanced lab work into field settings, and it requires some expertise. But some people see it as a money grab.”
As for officer training, says Lai, “I worry that it might not be effective. But a lack of evidence doesn’t mean it’s ineffective, just that it hasn’t been studied.”
Lai, Greenwald, Moskowitz, and others don’t think bias is intractable, and they’re continuing to test methods that might make a difference in the real world. More intensive or longer-lasting programs might be more successful; targeting children is possibly more effective. Moskowitz suspects that tapping into motivations and goals is key. Lai, research director for Project Implicit, says that his group is “back at the drawing board” and looking at approaches adopted from research on childhood education, among other sources. Moskowitz and his research partner Jeff Stone of the University of Arizona are testing an intervention at the university’s medical school, gathering data over time to see if trainings with first-year students last over the long haul.
In the meantime, what does seem to be effective are hacks and workarounds that prevent the bias from being triggered in the first place. The most famous example is the blind audition, in which symphony orchestras evaluate applicants from behind a screen, so that gender does not enter into decision-making. This method increased women’s chances of moving from the first to second round of auditions by half.
Along these lines, some companies have adopted blind reviewing, taking the names off of job applicants’ resumes. Others use structured interviews with a consistent set of questions, to make the hiring process more data-driven and less prone to subjective impressions. Cook Ross includes recommendations for these structural tweaks to help companies conduct performance reviews and interviews more equitably. Police departments can introduce rule-based criteria to guide traffic stops or arrests that might reduce the impact of bias.
These workarounds are designed for controlled environments like an office, or perhaps even a routine traffic stop. It’s hard to imagine how they’d help in a darkened hallway, when a stressed-out cop hears a door unlatch and draws his gun. In fact, it’s difficult to imagine anything at all that could overcome the fear and adrenaline of that moment. But the researchers who study bias think that if they learn more, it might be possible. “It’s not that we don’t know what we want to do,” says Greenwald. “We just don’t know how to do it.”
Kat McGowan is an independent journalist based in Berkeley, California, and New York City. Find her on Twitter @mcgowankat.