Moral luck isn’t just a philosopher’s toy concept. It’s reflected in our legal system. Suppose that you and your roommate, Riley, get equally drunk and drive home separately on similar routes. You are equally skilled drivers and equally impaired, and just by chance, you kill somebody crossing the street while Riley gets home without incident. The law treats you far more harshly than Riley, even though Riley was merely lucky that no pedestrian got in the way. People typically feel you are more blameworthy than Riley for what happened, even though both of you made the same decision and took the same action: driving drunk.
So, luck matters to the law. Should it? In a 2014 study, Heather C. Lench, a psychologist at Texas A&M University, and her colleagues found that ninety percent of people polled claimed to reject moral luck as an abstract principle: they don’t think luck should ever morally matter. Yet when those same people were asked to judge moral culpability in hypothetical situations, moral luck had a clear effect on their judgments. Most people think we shouldn’t use moral luck, even though, in practice, we do.
Moral luck might have come about because of a general correlation we see in the world between bad outcomes and personal responsibility. If someone gets hurt physically or emotionally from, say, losing a loved one, someone is often responsible (or held responsible)—maybe there was a fight, or neglect of someone in need. If nobody gets hurt, blame is never an issue. Bad outcomes are also more salient than good outcomes—a well-documented effect called the negativity bias. So when there’s a bad outcome—a valuable object goes missing from your home without your permission or knowledge—it tends to trigger moral judgment. “Who took it? Probably someone who harbors ill-will against me.”
We can think people are bad simply for having false beliefs about what’s going to happen.
The tendency to blame people for bad outcomes might have led to an overgeneralization, so that when somebody gets hurt, but it’s nobody’s fault, we blame somebody anyway: If Shep was supervising the kid when he got hurt, we might blame Shep even if the injury was a complete accident. This would suggest, however, that thinking about how things might have turned out differently—that is, thinking that Shep might have acted identically with nobody getting hurt—might change how much we’d blame him.
Turns out that thinking about different outcomes does affect our moral judgment. In the 2014 study, Lench and her colleagues asked people to consider a man on a bridge throwing a brick onto a freeway below without being able to see where the brick might land. People recommended harsher punishments for the man if someone got hurt, but they suggested more lenient punishments for the man after being asked to imagine how someone might not have gotten hurt.
Outcome, though, is not the only parameter you can adjust in these scenarios. In 2010, MIT cognitive scientist Liane Young and her colleagues noticed that many previous studies of moral luck contained a confound. They were, in theory, testing moral scenarios (like the drunk-driving and brick-throwing examples) in which the outcome mattered, but they didn’t control for what the agent in those scenarios believed. Could it be that moral blame is determined not just by the outcome of your action, but also by whether you held a false belief about what was going to happen?
To test this, they described to people a scenario that went like this: Mitch is about to bathe his two-year-old son. The bath is full, and his son is standing by the tub. The phone rings, and Mitch asks his son to wait by the tub while he answers it. Young gave different versions of the story—either Mitch’s son often did what he was told or didn’t, got in the tub this time or didn’t, and drowned in the tub or didn’t—and asked how blameworthy Mitch was in these different scenarios.
Here is where Young’s participants saw a moral difference. Even in the versions of the story where the son generally did what he was told and didn’t drown, but instead happily enjoyed his bath alone, people blamed Mitch more when his belief that his son would stay put turned out to be false than when it turned out to be true. That is, Mitch was judged blameworthy despite the absence of any negative outcome, and despite holding a reasonable belief that his son would stay out of the tub. This shows that we can think people are bad simply for having false beliefs about what’s going to happen, even when being right turns out to be a matter of luck.
Whether we should hold people morally responsible for luck, or for having false beliefs, is beyond the reach of science. But experiments can show us how we tend to make judgments. We’re not always consciously weighing outcomes, beliefs, and intent, but many studies show that they have effects on our judgments nonetheless.
This knowledge can feed back into our moral judgments—and perhaps should. Rather than purely going with our naked moral intuitions, we can incorporate what we know about the origin of those gut reactions, and how they interact with our rational deliberation, and temper our ideas about what we should consider right and wrong or worthy of praise or blame.
Jim Davies is an associate professor at the Institute of Cognitive Science at Carleton University in Ottawa, and author of Riveted: The Science of Why Jokes Make Us Laugh, Movies Make Us Cry, and Religion Makes Us Feel One with the Universe. His sister is novelist JD Spero.