Many people cheat on taxes—no mystery there. But many people don’t, even if they wouldn’t be caught—now, that’s weird. Or is it? Psychologists are deeply perplexed by human moral behavior, because it often doesn’t seem to make any logical sense. You might think that we should just be grateful for it. But if we could understand these seemingly irrational acts, perhaps we could encourage more of them.

It’s not as though people haven’t been trying to fathom our moral instincts; it is one of the oldest concerns of philosophy and theology. But what distinguishes the project today is the sheer variety of academic disciplines it brings together: not just moral philosophy and psychology, but also biology, economics, mathematics, and computer science. They do not merely contemplate the rationale for moral beliefs, but study how morality operates in the real world, or fails to. David Rand of Yale University epitomizes the breadth of this science, ranging from abstract equations to large-scale societal interventions. “I’m a weird person,” he says, “who has a foot in each world, of model-making and of actual experiments and psychological theory building.”

Good or evil?: The great Enlightenment philosopher Jean-Jacques Rousseau (left) argued that moral behavior is innate, whereas Thomas Hobbes, a 17th-century English philosopher, maintained that humans are “naturally wicked,” and must be protected from themselves by governments. Credit: Wikipedia

In 2012 he and two similarly broad-minded Harvard professors, Martin Nowak and Joshua Greene, tackled a question that exercised the likes of Thomas Hobbes and Jean-Jacques Rousseau: Which is our default mode, selfishness or selflessness? Do we all have craven instincts we must restrain by force of will? Or are we basically good, even if we slip up sometimes?

They collected data from 10 experiments, most of them using a standard economics scenario called a public-goods game.1 Groups of four people, either American college students or American adults participating online, were given some money. They were allowed to place some of it into a pool, which was then multiplied and distributed evenly. A participant could maximize his or her income by contributing nothing and just sharing in the gains, but people usually gave something. Despite the temptation to be selfish, most people showed selflessness.
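To make the temptation concrete, here is a minimal sketch of the payoff arithmetic in a four-player public-goods game. The $10 endowment and the multiplier of 2 are illustrative assumptions, not the values used in the studies; the point is only that free-riding pays more individually even though universal contribution pays more collectively.

```python
def public_goods_payoffs(contributions, endowment=10.0, multiplier=2.0):
    """Payoff for each player in a one-shot public-goods game.

    Each player keeps whatever they did not contribute; the pooled
    contributions are multiplied and split evenly among all players.
    (Endowment and multiplier are illustrative, not the studies' values.)
    """
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone contributing beats everyone holding back...
print(public_goods_payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
print(public_goods_payoffs([0, 0, 0, 0]))      # [10.0, 10.0, 10.0, 10.0]
# ...but the lone free-rider still comes out ahead of the contributors.
print(public_goods_payoffs([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]
```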

The fuzziness of psychological ideas makes them hard to test. If an experimental result doesn’t fit your theory of human behavior, you can fiddle with the definitions.

This finding was old news, but Rand and his colleagues wanted to know how much deliberation went into such acts of generosity. So in two of the experiments, subjects were prodded to think intuitively or deliberately; in two others, half of the subjects were forced to make their decision under time pressure and half were not; and in the rest, subjects could go at their own pace and some naturally made their decisions faster than others. If your morning commute is any evidence, people in a hurry would be extra selfish. But the opposite was true: Those who responded quickly gave more. Conversely, when people took their time to deliberate or were encouraged to contemplate their choice, they gave less.

The researchers worked under the assumption that snap judgments reveal our intuitive impulses. Our intuition, apparently, is to cooperate with others. Selfish behavior comes from thinking too much, not too little. Rand recently verified this finding in a meta-analysis of 51 similar studies from different research groups.2 “Most people think we are intuitively selfish,” Rand says—based on a survey he conducted—but “our lab experiments show that making people rely more on intuition increases cooperation.”

The cooperative impulse isn’t confined to an artificial experimental setting. In another paper, Rand and Ziv Epstein of Pomona College studied interviews with 51 recipients of the Carnegie Hero Medal, who had demonstrated extreme altruism by risking their lives to save others.3 Study participants read the interviews and rated the medalists on how much their thinking seemed intuitive versus deliberative. And intuition dominated. “I’m thankful I was able to act and not think about it,” a college student who rescued a 69-year-old woman from a car during a flash flood explained.

So Rand made a strong case that people are intuitive cooperators, but he considered these findings just the start. It’s one thing to put forward an idea and some evidence for it—lots of past researchers have done that. It’s quite another to describe and explain that idea in a rigorous, mathematical fashion. Ironically, Rand figured he could make better sense of humans by stepping away from studying real ones.

The overwhelming majority of psychological theories are verbal: explanations of the ways people act using everyday language, with maybe a few terms of art thrown in. But words can be imprecise. It may be true that “cooperation is intuitive,” but when is it intuitive? And what exactly does “intuitive” mean? The fuzziness of psychological ideas makes them hard to test. If an experimental result doesn’t fit your theory of human behavior, you can fiddle with the definitions and claim you were right all along.

Rand has sought to create quantitative models. “Science is about developing theories,” he says, “not about developing a list of observations. And the reason formal models are so important is that if your goal is theory-building, then it’s essential that you have theories that are really clearly articulated and are falsifiable.”

To do that, he has developed computer simulations of society—The Sims, basically. These models represent collections of individual people described by computer “agents,” algorithms that capture a specific package of traits, such as a tendency to cooperate or not.4 You can do controlled experiments on these computerized citizens that would be impossible or unethical to do with real people.5 You can endow them with new personalities to see how they’d fare. You can observe social processes in action, on time scales ranging from seconds to generations, instead of just taking a snapshot of a person or group. You can watch the spread of certain behaviors throughout a population and how they influence other behaviors. Over time, the patterns that emerge can tell you things about large-scale social interaction that a lab experiment with a few real people never could.

SIMS-ulator: Agent-based computer models, somewhat like The Sims, can be used to try to explain human behavior by reducing it to its simplest elements. Researchers can perform experiments on computer people that would be unethical in any sort of real-person setting. Credit: Wikipedia

One of the first such models, in the early 1970s, studied housing segregation.6 It represented a city as a 16-by-13 grid of squares, populated by two types of people: stars and circles. Each star would move to the nearest location in which at least half its neighbors were also stars—it had a slight bias to be among similar others. Circles did the same. Even these mild biases led quickly to stark segregation, with all-star and all-circle regions of the board—a much more extreme partitioning than any one agent sought. The researcher, the economist Thomas Schelling, used his model to help explain racial segregation in American cities. A neighborhood can splinter into homogeneous patches even when individual residents are hardly prejudiced at all. (Of course, in reality, segregation also reflects outright racism and explicit policies of exclusion.) Schelling’s work became a case study of how a group’s collective behavior can diverge from the desires of any one agent.
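A toy version of Schelling's dynamic is easy to simulate. The sketch below is a loose illustration rather than a reconstruction of his 1971 model: two types of agents on a grid relocate to a random empty cell whenever fewer than half of their occupied neighbors match them, and segregated patches emerge anyway. The grid size, vacancy rate, and one-half threshold are assumptions chosen for brevity.

```python
import random

SIZE, EMPTY_FRAC, THRESHOLD = 16, 0.2, 0.5

def make_grid():
    """Random grid of '*' agents, 'o' agents, and empty (None) cells."""
    cells = ['*', 'o', None]
    weights = [(1 - EMPTY_FRAC) / 2, (1 - EMPTY_FRAC) / 2, EMPTY_FRAC]
    return [[random.choices(cells, weights)[0] for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(grid, r, c):
    """An agent is unhappy if fewer than THRESHOLD of its occupied neighbors match it."""
    me = grid[r][c]
    same = total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = (r + dr) % SIZE, (c + dc) % SIZE
            if grid[nr][nc] is not None:
                total += 1
                same += grid[nr][nc] == me
    return total > 0 and same / total < THRESHOLD

def step(grid):
    """Move every unhappy agent to a random empty cell; return how many moved."""
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    moved = 0
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None and unhappy(grid, r, c) and empties:
                dr, dc = empties.pop(random.randrange(len(empties)))
                grid[dr][dc], grid[r][c] = grid[r][c], None
                empties.append((r, c))
                moved += 1
    return moved

grid = make_grid()
for _ in range(200):            # cap iterations; the grid usually settles much sooner
    if step(grid) == 0:
        break
print('\n'.join(''.join(cell or '.' for cell in row) for row in grid))
```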

Such models have also been used to explore cooperation. In an influential paper in 1981, the political scientist Robert Axelrod programmed agents to play a simple game called the Prisoner’s Dilemma.7 Two players have to decide whether to cooperate with or betray the other, and they receive points based on their choices. The scoring system is set up to mimic an essential dilemma of social life. Together the players perform best if they both cooperate, yet each can maximize his or her own individual outcome, at the expense of the other, by acting selfishly. The game takes its name from a scenario in which the police interrogate two thieves, offering each a reward for ratting out his or her accomplice. The thieves aren’t able to communicate to reach a joint decision; they have to make their decisions independently. Acting rationally, each should rat out the other. But when they both act “rationally,” they actually end up with the most combined jail time.
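The scoring structure can be written down in a few lines. The numbers below are the conventional textbook payoffs (temptation 5, reward 3, punishment 1, sucker 0), chosen for illustration rather than taken from Axelrod's paper.

```python
# Conventional Prisoner's Dilemma payoffs (T=5 > R=3 > P=1 > S=0);
# each entry maps (my_move, their_move) to my score.
PAYOFF = {
    ('C', 'C'): 3,  # reward for mutual cooperation
    ('C', 'D'): 0,  # sucker's payoff: I cooperate, partner defects
    ('D', 'C'): 5,  # temptation to defect against a cooperator
    ('D', 'D'): 1,  # punishment for mutual defection
}

def score(my_move, their_move):
    return PAYOFF[(my_move, their_move)]

# Whatever the partner does, defecting scores more for me (5 > 3 and 1 > 0),
# yet mutual defection (1 + 1) is worse for the pair than mutual cooperation (3 + 3).
```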

It’s possible we’re born with a tendency to cooperate, but frequent cooperation (with beneficial results) is required to sustain our benevolence.

The game gets more interesting—and more analogous to real life—when you play multiple rounds with the same partner. Here, repeated cooperation is best not just for both partners as a unit but also for each individually. You can still occasionally double-cross your partner for extra points, however, as long as it doesn’t trigger later betrayal.

What is the best strategy, then? To find out, Axelrod solicited Prisoner’s Dilemma strategies from mathematicians, biologists, economists, political scientists, computer scientists, and physicists from around the world. Axelrod programmed his computerized agents with these strategies and made them play a round-robin tournament. Some strategies were quite sophisticated, but the winner was a simple one called tit-for-tat.

Tit-for-tat resembles human reciprocity. It starts with cooperation and, after that, does whatever the other player did on the previous round. An agent using the strategy extends an olive branch at first. If its opponent reciprocates, it keeps cooperating. But if its opponent double-crosses it, the tit-for-tat agent rescinds its peace offering until its opponent makes amends.
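Tit-for-tat itself fits in a couple of lines. The sketch below pits it against an always-defect strategy over repeated rounds, using the same illustrative payoffs as above; the opponent and round count are assumptions for demonstration, not entries from Axelrod's tournament.

```python
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}  # as above

def tit_for_tat(partner_history):
    """Cooperate on the first round; thereafter copy the partner's previous move."""
    return 'C' if not partner_history else partner_history[-1]

def always_defect(partner_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Total scores for two strategies over repeated rounds of the Prisoner's Dilemma."""
    hist_a, hist_b = [], []          # moves already made by a and by b
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation is sustained
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then it retaliates
```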

Because it combines the short-term temptation to be selfish with the long-term benefits of collaboration, the Prisoner’s Dilemma is an ideal model for human cooperation, and Rand has built on Axelrod’s work to understand why evolution might have favored intuitive selflessness.

PRISONER’S DILEMMA: This classic situation captures the essential tradeoff of human cooperation. If two corrupt business executives cooperate, they’ll both walk away with their ill-gotten gains; but their immediate incentive is not to cooperate. When one “defects”—ratting out his partner to the authorities—he keeps those gains for himself while reaping an additional reward. But when both follow this reasoning, both wind up in prison. Cooperation is good only if both parties cooperate—and you can never be sure your partner will do that. Credit: Christopher X Jon Jensen (CXJJensen) & Greg Riestenberg / Wikipedia

Rand and his grad student Adam Bear considered a variant of the Prisoner’s Dilemma in which matchups were either one-shot or multiple-round, chosen at random.8 The computerized agents faced a tough choice. In a one-off, they would score more points by betraying their opponent, whereas in repeated play cooperation made more sense. But the uncertainty made it unclear which strategy was best. Rand and Bear then added a twist. An agent could elect to pay some points at the start of an encounter—representing the efforts of deliberation—to suss out what kind of matchup it would face, so that it could tailor its strategy.

The agent had to decide whether the advantage of foreknowledge outweighed its cost. The price of the tip-off varied randomly, and each agent was programmed with a maximum price it would agree to pay; if the price exceeded that amount, the agent did not receive any advance information and instead chose some default behavior, following its “intuition.” In this way, the simulation allowed for different personality types. Some agents intuitively cooperated, others intuitively betrayed. Some occasionally deliberated, others didn’t.
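One way to picture an agent in this setup is as just two settings: a default move to fall back on and the most it will pay to find out what kind of game it is in. The sketch below is a loose paraphrase of that structure, not the actual model from Bear and Rand's paper; the class name, fields, and decision rule are assumptions for illustration.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    intuition: str      # default move when not deliberating: 'C' (cooperate) or 'D' (defect)
    max_cost: float     # the highest deliberation cost this agent will pay

    def choose(self, is_repeated, cost):
        """Pay to learn the game type if the cost is low enough; otherwise go with intuition."""
        if cost <= self.max_cost:
            # Deliberation reveals whether the matchup is repeated; the agent then
            # tailors its play: cooperate in repeated games, defect in one-shots.
            return 'C' if is_repeated else 'D'
        return self.intuition

# An intuitive cooperator that deliberates only when deliberation is cheap:
agent = Agent(intuition='C', max_cost=0.3)
cost = random.random()      # the price of the tip-off varies from encounter to encounter
print(agent.choose(is_repeated=False, cost=cost))
```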

Is deliberation helpful? That’s not immediately obvious. Intuitive thinking is fast but inflexible. Deliberative thinking can achieve better outcomes but takes time and energy. To see which strategy excelled in the long run, Rand and Bear’s model simulated a process of evolution. A large population of agents played the game with one another and either proliferated or died depending on how well they did. This process can model either genetic evolution or cultural evolution, in which the weak players don’t actually die, but merely adopt stronger strategies through imitation.
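The evolutionary step can be pictured as payoff-biased imitation: poorly scoring agents copy the strategies of better-scoring ones, with occasional random experimentation. The toy below shows only that update rule, using bare 'C'/'D' intuitions and one-shot payoffs; the population size, mutation rate, and fitness function are placeholders, not the settings from the paper.

```python
import random

# Illustrative one-shot Prisoner's Dilemma payoffs, as in the earlier sketches.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def fitness(agent, population):
    """Average one-shot score against every agent in the population."""
    return sum(PAYOFF[(agent, other)] for other in population) / len(population)

def evolve(population, generations=2000, mutation_rate=0.05):
    """Cultural evolution as imitation: a random agent either experiments or
    copies another agent chosen in proportion to that agent's fitness."""
    for _ in range(generations):
        i = random.randrange(len(population))
        if random.random() < mutation_rate:
            population[i] = random.choice(['C', 'D'])   # rare random experimentation
        else:
            scores = [fitness(a, population) for a in population]
            population[i] = random.choices(population, weights=scores)[0]  # imitate success
    return population

pop = ['C'] * 50 + ['D'] * 50
print(evolve(pop).count('D'))   # defectors dominate in this one-shot-only toy
```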

Most of us are genuinely good. And if we’re not, we can be encouraged to be. The math is there.

Typically, one strategy swept through the population and replaced the alternatives. This victorious strategy depended on the precise parameters of the game. For example, Rand and Bear varied the probability that matchups would be single- or multiple-round. When most were multi-round, the winning agents defaulted to cooperating but deliberated if the price was right and switched to betrayal if they found they were in a one-shot game. But when most were one-shots, the agents that prevailed were no longer willing to pay to deliberate at all. They simply double-crossed their opponents. In other words, the model produced either wary cooperation or uncompromising betrayal.

This outcome was notable for what was missing. Agents that always cooperated usually died off completely. Likewise, almost no set of game parameters favored agents that defaulted to the double-cross but were sometimes willing to deliberate. Bear and Rand stared at this asymmetry for several weeks, baffled.

Finally, they had a breakthrough. They realized that when your default is to betray, the benefits of deliberating—seeing a chance to cooperate—are uncertain, depending on what your partner does. With each partner second-guessing the other, and each factoring in the other’s second-guessing in turn, the suspicion compounds until there is no perceived benefit to deliberating at all. If your default is to cooperate, however, the benefits of deliberating—occasionally acting selfishly—accrue no matter what your partner does, and therefore deliberation makes more sense.
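A one-round caricature of that logic can be checked with the illustrative payoffs used earlier; these numbers are assumptions for the sake of the arithmetic, not values from the model.

```python
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker (illustrative values)

# An intuitive cooperator deliberating in a one-shot game switches C -> D:
gain_vs_cooperator = T - R   # partner cooperates: 5 - 3 = +2
gain_vs_defector   = P - S   # partner defects:    1 - 0 = +1
# Positive either way, so the value of deliberating doesn't hinge on the partner.

# An intuitive defector deliberating in a repeated game switches D -> C:
gain_if_partner_cooperates = R - P   # 3 - 1 = +2
gain_if_partner_defects    = S - P   # 0 - 1 = -1
# The switch pays only if the partner also cooperates, which a fellow intuitive
# defector (who won't pay to deliberate either) never does, so deliberation never pays.
```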

So, it seems there is a firm evolutionary logic to the human instinct to cooperate but adjust if necessary—to trust but verify. We ordinarily cooperate with other people, because cooperation brings us benefits, and our rational minds let us decipher when we might occasionally gain by acting selfishly instead.

The model also ties up a loose end from Rand’s earlier studies of public-goods games. In that research, time pressure caused some people to cooperate more, but never caused anyone to cooperate less. This asymmetry now makes sense. The only people who would have shown that behavior were those who were willing to deliberate, but defaulted to betrayal; the time pressure would bring out their Machiavellian inclinations. Evidently such people are rare. If someone is deep-down selfish, rational deliberation will only make them more so. And the evolutionary model shows why. Defectors who have qualms are quickly winnowed out by genetic or cultural evolution.

When it comes to getting people to cooperate more, Rand’s work brings good news. Our intuitions are not fixed at birth. We develop social heuristics, or rules of thumb for interpersonal behavior, based on the interactions we have. Change those interactions and you change behavior.

Rand, Nowak, and Greene tested that idea in their 2012 paper. They asked some subjects whether they’d ever played such economics games before. Those with previous experience didn’t become more generous when asked to think intuitively; they’d apparently become accustomed to the anonymous nature of such games and learned a new intuition. Unfortunately, it was a cynical one: They could get away with mooching off others. Similarly, subjects who reported that they couldn’t trust most of the people in their lives also didn’t become more generous when acting on intuition. It’s possible we’re born with a tendency to cooperate, but frequent cooperation (with beneficial results) is required to sustain our benevolence.

Happily, even the Grinch can expand his heart by three sizes, as Rand demonstrates in a recent study.9 First, he had test subjects play the Prisoner’s Dilemma for about 20 minutes with a variety of opponents. For half of the subjects, the average game lasted eight rounds, meaning cooperation was the best strategy; for half, the average game lasted a single round, which discouraged cooperation. Afterward, everyone played a public-goods game. Those who had been steeped in cooperation gave significantly more money in the second phase of the experiment than did those who had not. In less than half an hour, their intuitions had shifted.

How do you encourage cooperation in places where cooperation isn’t the norm? Corporate America comes to mind. “In a lot of situations people are basically rewarded for backstabbing and ladder-climbing,” Rand says. Rand and Bear’s modeling paper, in which intuitive defectors don’t trust each other enough even to consider whether cooperation would pay off, points to an answer. Rand suggests that, at least at first, incentives could come from above, so that the benefits of cooperating don’t depend solely on whether one’s partner cooperates. Companies might offer bonuses and recognition for helpful behavior. Once cooperation becomes a social heuristic, people will begin to cooperate when it benefits them, but also even when it doesn’t. Selflessness will be the new norm.

When selflessness is the norm, encouraging people to make decisions quickly can bring out their better angels. Extensions of this research reveal that we see quick or unthinking acts of generosity as particularly revealing of kindness, and that people may even use this signal strategically. In recent work, Rand and his collaborators have shown that people are faster to make decisions to cooperate when they know someone is watching, as if aware that others will judge them by their alacrity.10 Among other puzzles, Rand is currently trying to untangle this apparent paradox—the strategic use of intuition.

Rand’s work offers a correction to those misanthropes who peer into the hearts of men and women and see shadows. Most of us are genuinely good. And if we’re not, we can be encouraged to be. The math is there.

Seeing life as a set of economics games and cooperation as self-interest in disguise may sound dismal, but it is actually not so distant from what you might call virtue. “When I’m nice to other people, I’m not doing it because of some kind of calculation. I’m doing it because it feels good,” Rand says. “And the reason it feels good, I argue, is that it is actually payoff maximizing in the long run.”

Rand then adds a crucial clarification. “It feels good to be nice—unless the other person is a jerk,” he says. “And then it feels good to be mean.”

Tit for tat indeed.

Matthew Hutson is a science writer who’s written for Wired, The Atlantic, and The New York Times. He is the author of The 7 Laws of Magical Thinking.

References

1. Rand, D.G., Greene, J.D., & Nowak, M.A. Spontaneous giving and calculated greed. Nature 489, 427-430 (2012).

2. Rand, D.G. Cooperation, fast and slow: Meta-analytic evidence for a theory of social heuristics and self-interested deliberation. Psychological Science (forthcoming, 2016).

3. Rand, D.G. & Epstein, Z.G. Risking your life without a second thought: Intuitive decision-making and extreme altruism. PLoS One 9, e109687 (2014).

4. Smith, E.R. & Conrey, F.R. Agent-based modeling: A new approach for theory building in social psychology. Personality and Social Psychology Review 11, 87-104 (2007).

5. Smaldino, P.E., Calanchini, J., & Pickett, C.L. Theory development with agent-based models. Organizational Psychology Review 5, 300-317 (2015).

6. Schelling, T.C. Dynamic models of segregation. Journal of Mathematical Sociology 1, 143-186 (1971).

7. Axelrod, R. & Hamilton, W.D. The evolution of cooperation. Science 211, 1390-1396 (1981).

8. Bear, A. & Rand, D.G. Intuition, deliberation, and the evolution of cooperation. Proceedings of the National Academy of Sciences 113, 936-941 (2016).

9. Peysakhovich, A. & Rand, D.G. Habits of virtue: Creating norms of cooperation and defection in the laboratory. Management Science 62, 631-647 (2016).

10. Jordan, J.J., Hoffman, M., Nowak, M.A., & Rand, D.G. Uncalculating cooperation as a signal of trustworthiness. Available at SSRN (2016).
