Footage of a mob storming the Capitol on Jan. 6, 2021, in an effort to subvert the legal and peaceful transfer of power, filled many of us with horror. Underlying that response was our indignation at the brazen violation of central democratic institutions and values.
We respond similarly to news of hate crimes, mass shootings, police brutality, or discriminatory social policies. Such things offend against basic values and principles, such as the inherent worth and equality of persons and their lives, respect for human rights, and the importance of civil society and the rule of law—all of which strike many if not most of us as more than mere matters of taste.
In embracing core moral values and taking a moral stance on important issues we see ourselves as at least trying to get something right—something it matters greatly that we do get right, central to how we structure our lives and find meaning in them.
That is at least what moral practice is like when viewed “from the inside.” By that I mean your moral phenomenology as an engaged participant. This perspective presents the appearance of at least a core of basic objective and universally valid or correct moral standards we are trying to understand and to live by (even as we recognize that we will at best do so imperfectly). This is why we’re prepared to argue, often passionately, in defense of basic moral claims about social justice, for example, in a way we don’t with mere matters of taste; and it’s why we feel compelled to modify our views when someone convinces us that we have a blind spot or other error in our moral outlook.
There is, however, another perspective from which morality may be viewed and examined, where such ideas of correct moral standards, or of the truth of moral claims, are not even on the radar. Here, morality is approached “from the outside,” as a set of empirical phenomena alongside others. The goal here is to understand the factors—from evolutionary biology, psychology, sociology, and so on—that have caused us to develop and use moral concepts, to have certain feelings, and to make certain judgments and be motivated by them.1,2
From this perspective, your moral judgments about the insurrection, or about some racist or sexist policy, are viewed as objects of study rather than as candidates for truth or falsity. The scientist is not here engaging with philosophical issues about the potential accuracy of moral claims. The scientist instead approaches your judgments simply as psychological facts about you, causally traceable to various influences—just as with the thoughts and feelings that led to the social injustices you’re responding to.
How do these two perspectives on morality fit together? This turns out to be important for thinking about the prospects for an objective and universal human morality. Does science leave room for such an idea, and if so, could we, given our nature, be in a position to know such truths and be stably motivated to live according to them?
To a large extent, these two perspectives can peacefully coexist. Being committed to democratic or egalitarian values is no obstacle to exploring how humans evolved to be capable of developing and using such concepts to begin with. (For example, we might wonder whether such traits are Darwinian adaptations, and if so, how they enhanced fitness—what was their biological function, and what were the functions of the social constructs they generated?2,3) Nor is such commitment necessarily threatened by accounts of how humans evolved dispositions for specific kinds of moral judgment, feeling, and motivation—or by accounts of how the expressions of such traits were subsequently shaped by history.
Things become trickier, however, when we turn to explaining why we actually hold the particular moral beliefs we do, especially if the scientific explanations are claimed to be sufficient here. Notice that when you consider your moral beliefs “from the inside,” you see them as being held for certain reasons, not just as occurring due to certain causes. These reasons are the considerations you take to justify the beliefs, by constituting good reasons for thinking them to be true.
Consider the belief that it is wrong to deny girls and women the educational opportunities available to boys and men. If asked to explain why you believe this, you will offer your reasons for it. These are the considerations you take to show it to be true that such behavior is wrong, the appreciation of which leads you to hold the belief. Call this a “reason-giving explanation” of belief. You might point out that gender is irrelevant to the capacities involved with learning, that interests served by education are human interests rather than gender-specific ones, and that the harms caused by thwarting those interests are serious.
Your moral belief was not simply caused to happen in you, like a rash caused by biological and environmental factors. Instead, you are led to it through your judgment that these considerations are good reasons for holding the belief because they support the truth of its content. That judgment is itself a morally engaged one, and whether such judgments can be true (as in other domains)—that is, whether there are facts to the effect that certain considerations are good reasons for or against believing certain moral claims—is a philosophical question.
Suppose there are such facts, as there at least appear to be from your perspective. Then, if you’re right in your judgments about these reasons, it seems that you believe that gender discrimination in education is wrong, not merely because of biological or sociological pushes and pulls but because you’re morally competent and recognize good reasons for believing it to be so.
Contrast this with explanations “from the outside” that might take the form: “You just believe that because certain biological and sociological factors have caused you to believe it” (just as certain other biological and sociological factors have caused the opposite belief in others). Such explanations make no reference to ideas like moral truth or good reasons for believing moral contents to be true. Instead they might cite evolutionary influences that gave rise to empathy and a sense of fairness, along with cultural and other environmental influences that shaped and guided those impulses. These factors, it will be said, caused you to form your moral belief about educational policies, and so fully explain it.
Something crucial is missing here when you consider this “from the inside.” Such causal factors have no doubt played important roles in the story leading to your belief. But you won’t accept that you believe what you do about gender discrimination just because those factors caused you to (as other factors pushed someone else to hold opposing moral beliefs). You’ll likely want to reply, “No, I believe that gender discrimination is wrong because it is, and here are the reasons why it is.” And you’ll offer those reasons to defend the belief (and to critique opponents’ claims that other considerations are good reasons for denying it) as part of a moral discussion, not a discussion about mere causes. And if you’re right, then the proper explanation of your belief will appeal to your exercise of moral competence to grasp a moral truth.3,4
It is an open question in philosophy whether moral judgments can be true (and if so, what grounds such truth), and so whether there can, in fact, be good reasons for believing some moral claims rather than others. If the answer is no, then our reason-giving explanations are defective. All there would ultimately be in that case are external explanations for what caused us (erroneously) to think that certain considerations are good reasons for drawing certain moral conclusions. Contrary to how things seem “from the inside,” we really would just believe what we do because of contingent biological or sociological pushes and pulls.
That is only one philosophical possibility, however, and importantly, it is not itself established by science. The sciences, when they look at human thought, feeling, and behavior, do not appeal to philosophical ideas like good reasons for believing moral claims to be true, or attempt to adjudicate the soundness of moral arguments. But neither do they show that there aren’t facts about such things. And if there are—the other philosophical possibility here—then that makes a difference to explaining why we believe at least some of what we do in the moral domain.
Some of our moral beliefs will no doubt be false, based on erroneous judgments about which considerations are good reasons for believing which moral claims. (Since many people’s moral beliefs are logically incompatible they cannot all be true and based on good reasons.) But if there are moral truths that we can grasp, this opens up the possibility that at least some of our moral beliefs amount to knowledge of something real—just as they seem to from our perspective as moral agents. This is what we would be aiming for in critical moral reflection, debate, and refinement.4
I will not attempt to defend the second philosophical picture over the first, which is a large and complex issue. What I want to emphasize is just that it is a live and attractive philosophical possibility. It is not precluded by the existence of the scientific perspective and the causal influences it identifies. We would simply have to claim that there are limits to the reach of such explanations, given the philosophical dimension of the story. That dimension opens up the possibility, at least, that some of our moral beliefs reflect some sort of objective and universal moral reality.
There are two evolutionary challenges to this, however. One is that our evolutionary conditioning might have made it impossible for us to acquire knowledge of objective moral truths, even if they exist. The other is that our evolved psychology might make it impossible in any case for us to live according to objective moral standards.
The claim we are considering is that there are truths about what is good or right that are not simply a function of our beliefs, feelings, attitudes, cultural conventions, and so on. This is not to be confused with the claim that there are, as a matter of empirical fact, universally held moral beliefs. The idea is instead that there are objectively correct moral standards, rooted perhaps in facts about the inherent moral significance of persons (among other things), that apply across cultures and times regardless of whether they have been universally recognized.
Are there objectively correct moral standards that rule out chattel slavery, for example, across cultures and times, making it wrong wherever it occurs (even though this hasn’t always been recognized, as with many other truths, such as those about the shape and motion of the Earth)?
To endorse this idea (roughly what is known as “moral realism”) is not to deny that there can be plenty of morally legitimate variation across cultures and times due to differences in circumstances, or that there are many ways to realize or respect the core values in question. Surely there are many equally good ways to structure a society or to live a life, all consistent with human dignity and rights, for example. The claim is just that some practices will be beyond the pale—not just innocent cultural variations in ways of respecting these values but violations of them. Plausible candidates include things like slavery, rape, racist or sexist discrimination, or cruelty to animals.
The present point is just that one can believe in moral objectivity or universality, in the relevant sense, without being crudely ethnocentric or closed-minded about how much room there is for variation among equally good ways of realizing core values. We needn’t choose between wide-open moral relativism and a rigid insistence that there is just one right way to live.
Still, even if there is an objective, universal morality in this sense, there are both epistemic and motivational challenges here, given our Darwinian background.
Natural selection processes shaped not only human physiology but human psychology as well—our cognitive, affective, and motivational capacities and dispositions. In some cases, these traits are plausibly biological adaptations, having enhanced the fitness of our Pleistocene ancestors, while in others they may be side effects of adaptations. None of this by itself undermines the philosophical possibility I sketched above. But there is a problem if we claim that evolution so pervasively shaped the affective and cognitive dispositions related to moral thinking that we are effectively in the grip of its influence.5,6
This is a version of the “you just believe that because …” claim, and is meant to undermine confidence in our moral judgments. Even if we feel like we have good reasons for our moral beliefs, the “evolutionary debunker” will seek to explain that away by citing evolutionary causes for why our moral beliefs seem plausible to us and why the reasons we cite in their defense seem like good reasons.
You believe your children’s welfare matters, and that you have a moral duty to take care of them? Well, of course you do: Evolution “wired” you to have such feelings and beliefs. And it did so independently of any actual objective moral truths about value or duty: What caused such feelings to be included in your psychology was simply that genes for such feelings and dispositions were more effective in getting themselves replicated down the generations than rival alleles, since our evolutionary ancestors who cared about and for their children contributed more copies of their genes to the gene pool.
In this picture, even if there are objective moral truths about value and duty, your moral beliefs aren’t tracking such truths but are simply reflecting causal influences governed by Darwinian principles involving genetic propagation for Pleistocene humans. In other words, the shaping of our moral faculties was “morally blind,” based simply on factors relevant to Darwinian fitness; so the beliefs churned out by these faculties can’t be trusted as tracking anything like objective moral facts, even if such facts exist. Cultural contributions may give particular spins to these evolutionarily based feelings or beliefs, but that doesn’t get us out of the “garbage-in, garbage-out” problem. So, according to the debunker, knowledge of an objective, universal morality of the sort we’ve considered is not in any case a possibility for us.5,6
I have elsewhere critiqued such debunking arguments.3,4 Here I will just note that the matter turns on whether we should accept the debunker’s contention that our moral beliefs simply reflect “morally blind” causal influences from evolution (with cultural window dressing added), or whether we should instead accept the alternative sketched earlier. This alternative is that, as in other areas of inquiry, we are capable of developing and deploying our evolved faculties, in cultural contexts of rich traditions of inquiry, in ways that are largely independent of specific evolutionary micromanagement. In particular, we can use the large and versatile brains evolution gave us to reflect on our lived experience and knowledge of the world, reason morally about this, and come to recognize good reasons for believing certain moral claims and disbelieving others; we aren’t simply stuck in ruts laid down for us by evolutionary shaping or mere cultural variations on those themes.
We manage similar feats all the time in other areas, after all, cultivating our evolved faculties and using them in ways that go far beyond anything evolution “designed” them for—everything from 11-dimensional physics to poetry or jazz, or for that matter, the sort of philosophical reflection you’re engaging in right now.3,4
My own suggestion is that insofar as you are confident, after critical reflection, that your reasons for believing gender discrimination to be wrong are good ones, you should not rush to accept a debunking explanation of that belief. Similarly with others. We are fallible, and where we are in error, the only explanations for those beliefs will be debunking ones. But that needn’t lead us to worry that debunking explanations apply to our moral beliefs across the board. In the absence of particular moral arguments against them, we needn’t lose confidence in them. Merely pointing to evolutionary influences behind a disposition toward egalitarianism, for example, doesn’t show that your reasons for judging gender discrimination to be wrong aren’t in fact good ones or that you lack knowledge of that wrongness. This may just be a case where evolution partly helped us along (and there will be many others where it did the opposite).4
It’s worth noting, however, that even if our evolutionary background does not debunk our claim to know objective moral truths, the explanatory stories told by debunkers can still be useful in flagging ways in which evolutionary influences might be distorting our moral thinking. This might lead us to subject them to closer rational scrutiny, and in some cases we may find that they don’t stand up well.3,4 A plausible example might be the degree to which we tend to favor members of our in-group over outsiders, or to police gender and sexuality in the name of “morality.”
Even if we are able to discover objective moral standards, some may doubt whether our evolved motivational structures—built for Pleistocene hunter-gatherers—are up to the task of adhering to those standards. This issue is, of course, more pressing the more idealistic these standards turn out to be. Many of us think, for example, that the correct moral standards are far more inclusive than they have traditionally been thought to be. They plausibly embody equal moral considerability of all human beings regardless of race, gender, or nationality, including future generations (as a matter of intergenerational justice); and they likely require us to give greater consideration to the interests of non-human animals than is usually thought.
Our evolved psychology has altruistic elements that can ideally combine with reason to enable us to live up to such inclusive and egalitarian ideals. But we also carry darker baggage as part of our heritage, which just as surely drags us away from them. This was dramatically on display in the January insurrection, and in the years of tribalistic bigotry, fear, and exclusionist thinking that led to it, reflected in far-right movements globally. One might wonder, then, whether “the better angels of our nature” are robust enough to create and sustain a better world—or even, in the face of climate change, to save the one we have.
The answer remains to be seen. There is, however, a theory of evolved human moral nature that provides grounds for cautious optimism. Allen Buchanan and Russell Powell, philosophers at Duke University and Boston University, respectively, have argued that evolution has given us strong conditional dispositions for either inclusivist or exclusivist thought, feeling, and behavior, depending on environmental conditions and cues.7 It’s a model of “adaptive plasticity.”
Under conditions of real or perceived out-group threats involving competition for scarce resources, for example, the disposition for tribalist and exclusivist responses is triggered (for example, emphasizing in-group racial or national identity) and the disposition for more inclusivist thought and feeling is shut down. The reverse is true when such threat cues are absent, and dispositions for more inclusive thought and feeling are activated and able to support intelligent moral reasoning. We are this way, according to the theory, because such conditional dispositions enhanced biological fitness in Pleistocene humans. How well they serve our interests today is another question.
This picture contains both hope and a clear warning.7,8 There is hope for progress if we can foster and maintain socio-institutional contexts that minimize threat cues, so that inclusive and reflective moral thought and feeling may flourish while keeping tribalistic, exclusivist impulses dormant. The difficulty is that it is not only real threats that trigger such impulses but perceived ones as well. This makes us distinctly vulnerable to the large-scale, distorting effects of demagogic populism, fueled by the rapid spread of misinformation through social media. And the danger is all the greater with the emergence of figures who expertly exploit powerful threat cues for political advantage. By spewing disinformation and stoking out-group threat cues, they foster loyalty built around racial or national identity (such as white nationalism) and opposition to marginalized others (as in anti-immigrant sentiment)—the opposite of moral progress.
All of this underscores why the recent attacks on our basic institutions and values are so alarming and dangerous. Our prospects for moving closer to the ideals of a plausibly objective and universal morality depend on reversing the trends that have led us here and creating an environment where the better aspects of human nature can lead us forward.8
William J. FitzPatrick is the Gideon Webster Burbank Professor of Intellectual and Moral Philosophy at the University of Rochester and an associate editor at the journal Ethics.
1. de Waal, F. Good Natured: The Origins of Right and Wrong in Humans and Other Animals. Harvard University Press, Cambridge, MA (1996).
2. Kitcher, P. The Ethical Project. Harvard University Press, Cambridge, MA (2011).
3. FitzPatrick, W.J. Morality and evolutionary biology. The Stanford Encyclopedia of Philosophy (2021).
4. FitzPatrick, W.J. Debunking evolutionary debunking of ethical realism. Philosophical Studies 172, 883-904 (2015).
5. Street, S. A Darwinian dilemma for realist theories of value. Philosophical Studies 127, 109-166 (2006).
6. Joyce, R. The Evolution of Morality. MIT Press, Cambridge, MA (2006).
7. Buchanan, A. & Powell, R. The Evolution of Moral Progress: A Biocultural Theory. Oxford University Press, Oxford, U.K. (2018).
8. Buchanan, A. Our Moral Fate: Evolution and the Escape from Tribalism. MIT Press, Cambridge, MA (2020).