One of the defining attributes of humans is that we are champion cooperators, cooperating on a scale far beyond what is observed in any other species in the animal kingdom. Understanding how cooperation is sustained, particularly in large-scale anonymous societies, remains a central question for both evolutionary scientists and policy makers.
Social scientists frequently use behavioural game theory to model cooperation in laboratory settings. These experiments suggest that “institutional punishment” can be used to sustain cooperation in large groups, a setup analogous to the role governments play in wider society. In the real world, however, corruption can undermine the effectiveness of such institutions.
In July’s edition of the journal Nature Human Behaviour, Michael Muthukrishna and his colleagues Patrick Francois, Shayan Pourahmadi, and Joe Henrich published an experimental study which rather cleverly incorporated corruption into a classic behavioural economic game.
Corruption worldwide remains widespread, unevenly distributed, and costly. The authors cite World Bank estimates that $1 trillion is paid in bribes alone each year. Levels of corruption, however, vary considerably across geographies. For example, estimates suggest that in Kenya eight out of 10 interactions with public officials require a bribe. Conversely, indices suggest that Denmark has the lowest level of corruption, and the average Dane may never pay a bribe in their lifetime.
Transparency International state that more than 6 billion people live in countries with a serious corruption problem. The costs of corruption range from reduced welfare programmes to deaths from collapsed buildings. In other words, corruption can kill.
Muthukrishna’s work suggests that corruption is largely inevitable, given our evolved psychological dispositions; the challenge is to find the conditions under which corruption and its detrimental impacts can be minimized. As Muthukrishna is quoted as saying in a London School of Economics press release for the paper:
Corruption is actually a form of cooperation rooted in our history, and easier to explain than a functioning, modern state. Modern states represent an unprecedented scale of cooperation that is always under threat by smaller scales of cooperation. What we call “corruption” is a smaller scale of cooperation undermining a larger scale.
To model corruption, the authors modified a behavioural economic game called the “institutional punishment game.” The 274 participants were anonymous and came from countries with varying levels of corruption. Each was provided with an endowment, which they could divide between themselves and a public pool. The public pool is then multiplied by some factor and divided equally among the players, regardless of their contributions.
The institutional punishment game is designed so that it is in every player’s self-interest to let others contribute to the public pool while contributing nothing oneself. However, the gain for the group as a whole is highest if everybody contributes the maximum possible. Each round, one group member is randomly assigned the role of leader and can allocate punishments using taxes extracted from the other players.
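To make the incentive structure concrete, here is a minimal sketch of the payoff logic in Python. The endowment, multipliers, group size, and leader tax below are illustrative placeholders of my own choosing, not the parameters used in the study.

```python
# Illustrative sketch of a public goods game with an institutional punisher.
# All numbers are placeholder values, not those used in the experiment.

ENDOWMENT = 20          # points each player starts the round with
POOL_MULTIPLIER = 1.5   # "economic potential": the pool is scaled by this
PUNISH_MULTIPLIER = 3   # each tax point spent removes this many points
LEADER_TAX = 2          # flat tax funding the leader's punishment budget


def round_payoffs(contributions, punishments):
    """Payoff for each player in one round.

    contributions: points each player puts into the public pool
    punishments:   tax points the leader spends punishing each player
                   (the budget constraint is omitted for brevity)
    """
    pool = sum(contributions) * POOL_MULTIPLIER
    share = pool / len(contributions)   # equal share, whatever you contributed
    return [
        ENDOWMENT - c - LEADER_TAX + share - p * PUNISH_MULTIPLIER
        for c, p in zip(contributions, punishments)
    ]


# Free-riding pays individually unless the leader punishes it:
print(round_payoffs([20, 20, 20, 0], [0, 0, 0, 0]))  # the free-rider earns most
print(round_payoffs([20, 20, 20, 0], [0, 0, 0, 8]))  # punishment erases that edge
```

With these placeholder numbers, the free-rider out-earns the contributors (40.5 versus 20.5 points) until the leader spends enough tax points on punishment, which is precisely the tension the game is designed to create.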
The “bribery game” that Muthukrishna and his colleagues developed is the same as the basic game, except that each player could offer the leader a bribe. The leader could therefore see both each player’s contribution to the public pool and the amount each player offered to them personally. The experimenters also manipulated the “pool multiplier” (a proxy for economic potential) and the “punishment multiplier” (the power of the leader to punish).
For each player’s move, the leader could decide to do nothing, accept the bribe offered, or punish the player by taking away their points. Any points offered to the leader that he or she rejected were returned to the group member who made the offer. Group members could see only the leader’s actions toward them and their payoff, but not the leader’s actions toward other group members.
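The leader’s options can be sketched as follows, continuing the illustration above; the simple decision rule here is mine, for illustration only, and is not how leaders in the study were assumed or observed to behave.

```python
# The three actions available to the leader for each group member's move.
# The decision rule below is purely illustrative, not taken from the study.

def leader_response(contribution, bribe_offer, corrupt, low_contribution=10):
    """Return 'accept_bribe', 'punish', or 'do_nothing'.

    A rejected bribe is returned to the member who offered it, so only
    'accept_bribe' transfers the offered points to the leader.
    """
    if corrupt and bribe_offer > 0:
        return "accept_bribe"   # pocket the bribe and overlook free-riding
    if contribution < low_contribution:
        return "punish"         # spend tax points to deduct the member's points
    return "do_nothing"         # any offered bribe is refunded


print(leader_response(contribution=0, bribe_offer=5, corrupt=True))   # accept_bribe
print(leader_response(contribution=0, bribe_offer=5, corrupt=False))  # punish
```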
Compared to the basic public goods game, the addition of bribes caused a large decrease in public good provisioning (a decline of 25 percent).
Leaders with a stronger punishment multiplier at their disposal (referred to as “strong leaders”) were approximately twice as likely to accept bribes, and were three times less likely to respond in any other way, such as punishing free-riders. As the authors expected, more power led to more corrupt behaviour.
Having generated corruption, the authors introduced transparency to the bribery game. In the “partial transparency” condition, group members could see not only the leader’s actions toward them but also the leader’s own contributions to the public pool; they still could not see the leader’s actions toward other group members. In the “full transparency” condition, this information was made fully available: each group member’s contribution to the pool, the bribes offered to the leader, and the leader’s subsequent action in each case. Although the costs of bribery were seen in all contexts, the detrimental effects were most pronounced under poor economic conditions.
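The conditions differ only in what information each group member can observe. The summary below uses my own shorthand labels, not the authors’ terminology or implementation.

```python
# What a group member can observe under each condition
# (the labels are my shorthand, not the authors').

VISIBILITY = {
    "bribery": {
        "leader_actions_toward_me", "my_payoff",
    },
    "partial_transparency": {
        "leader_actions_toward_me", "my_payoff",
        "leader_own_contribution",
    },
    "full_transparency": {
        "leader_actions_toward_me", "my_payoff",
        "leader_own_contribution",
        "other_members_contributions", "bribes_offered_to_leader",
        "leader_actions_toward_others",
    },
}
```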
The experiments demonstrated that these corruption mitigation strategies increased contributions when leaders were strong or economic potential was high. When leaders were weak (that is, their punitive powers were low) and economic potential was poor, full transparency had no effect, and partial transparency actually decreased contributions to levels below those of the standard bribery game.
The study indicates that corruption mitigation strategies help in some contexts, but elsewhere may cause the situation to deteriorate and can, therefore, backfire. As stated by the authors: “[…] proposed panaceas, such as transparency, may actually be harmful in some contexts.”
The findings are not surprising from a social-psychological perspective, and support a vast literature on the impact of social norms on behaviour. Transparency and exposure to institutional corruption may reinforce the perception that most people are engaging in corrupt behaviour, and that such behaviour is permissible (or that one needs to engage in such dealings oneself to succeed). Why partial transparency had a more detrimental impact than full transparency when leaders were weak is not made clear, however.
Remarkably, the authors found that participants who had grown up in more corrupt countries were more willing to accept bribes. The most plausible explanation offered is that exposure to corruption while growing up led to these social norms being internalized, and that this then manifested in these individuals’ behaviour during the experiments.
It’s important to note that this is only one experimental study looking into anti-corruption strategies, and that caution is required when extending these research findings to practice. As stated by the authors: “Laboratory work on the causes and cures of corruption must inform and be informed by real-world investigations of corruption from around the globe.”
This aside, the authors’ research challenges widely held assumptions about how best to reduce corruption, and may help explain why “cures for corruption” that prove successful in rich nations may not work elsewhere. To paraphrase the late Louis Brandeis, “sunlight is said to be the best of disinfectants,” yet this may depend on climatic conditions and the prevalence of pathogens.
Max Beilby is a business psychology practitioner working in the banking industry. He holds a master’s degree in Organisational & Social Psychology from the London School of Economics, and a degree in Business Management from the University of Brighton. He is also a member of the Association for Business Psychology, and the Human Behavior & Evolution Society. This post is reprinted with permission from his blog, Darwinian Business.