In the 1960s, the moral philosopher Philippa Foot devised a thought experiment that would revolutionize her field. This ethical puzzle, today known as the “trolley problem,” has become so influential—not just in philosophy but also in neuroscience, behavioral economics, evolutionary psychology, and meme culture—that it’s garnered its own tongue-in-cheek sub-discipline, called “trolleyology.” That body of commentary, wrote one philosopher, “makes the Talmud look like Cliffs Notes.”
The person largely responsible for popularizing the trolley problem was the philosopher Judith Jarvis Thomson. Her 1976 paper, “Killing, Letting Die, and the Trolley Problem,” tweaked the original scenario. In Foot’s version, five workers are on a track in front of a runaway trolley, and you, the driver, must choose whether to change tracks, which will put you on a collision course with just a single worker—you could be responsible for the death of one instead of five. But in Thomson’s version, you are a bystander on a footbridge positioned behind an extremely fat man. You notice that, if you were to push him onto the tracks, he’d be large enough to stop the runaway trolley, saving everyone (except him). Should you?
Most people recoil at the option, even though sacrificing the fat man would ultimately save five people. On the other hand, changing tracks to avoid killing five people seems like a moral no-brainer, even though one person ends up dying as well. Why we respond so differently to these numerically identical scenarios struck many researchers in ethics and moral psychology as a question worth asking. Jesse Prinz, a philosopher of psychology at the City University of New York, for instance, thinks moral obligations can be empirically discovered. Others think the dilemma may help clarify our moral intuitions; and by scanning subjects in an fMRI while they reason about it, researchers might also illuminate why we think the way we do about the dilemma in its different forms. But what if this dilemma is too simplistic to be useful?
That’s what Christopher Bauman, a social psychologist at the University of California, Irvine, proposed, along with some colleagues, in a 2014 paper. They were “concerned” by these dilemmas for three reasons: First, “they are amusing rather than sobering”; second, “they are unrealistic and unrepresentative of the moral situations people encounter in the real world”; and third, “they do not elicit the same psychological processes as other moral situations.” For example, as the authors go on to say, “People often scoff at the notion that the fat man’s body could really stop a train, question whether there really is no place for workers on the track to go, and dispute whether anyone could really appraise all of the important aspects of the situations with certainty and in time to act.”
While most people aren’t fazed by the certainty assumed in each dilemma (you take it for granted that the fat man will, if you push him off the bridge, stop the train), some can’t get over it. In a 2009 paper on the trolley problem, for instance, authors noted that approximately 5 percent of their subjects in one study, and 12 percent in another, circled the choice, “I did not find the description from the preceding pages to be realistic, and my answers reflect my inability to take seriously the description that was given.”
It turns out that introducing uncertainty changes how people think about the dilemma. In a 2014 study, psychologists Katherine Kortenkamp and Colleen F. Moore first gave subjects a version of the trolley problem where outcomes were guaranteed to occur, and then one where they weren’t. Subjects in the latter case were less likely to think killing one to save five was either “appropriate” or “moral” (they were asked about each). Under uncertainty, “participants may have relied more on deontological,” or rule-based reasoning, “than utilitarian moral reasoning,” the researchers say. Utilitarian reasoning relies on being able to know what the consequences of your action will be; if you are unsure, you might feel it safer, and not just ethically, to act according to a moral precept.
This explanation aligns with the results of a 1992 study titled, “The disjunction effect in choice under uncertainty.” That paper found “that uncertainty of outcomes leads to less consequentialist decision making in nonmoral decision scenarios,” Kortenkamp and Moore say. “These findings suggest that if research on moral judgments and reasoning is to apply to uncertain situations, the fact of uncertainty must be considered.”
Maybe it’s simply time for us to put the brakes on moral dilemmas like the trolley problem, with their built-in, clear-cut consequences, and embrace more ambiguity—real life is full of it.
Matthew Sedacca is an editorial intern at Nautilus.
The lead photograph is courtesy of Erlend Aaby via Flickr.