
My conclusion is that in a colonized universe the probability of the annihilation of the human race could actually rise rather than fall. Illustration by David Revoy / Blender Foundation / Wikicommons


There are lots of reasons why colonizing space seems compelling. The popular astronomer Neil deGrasse Tyson argues that it would stimulate the economy and inspire the next generation of scientists. Elon Musk, who founded SpaceX, argues that "there is a strong humanitarian argument for making life multiplanetary…to safeguard the existence of humanity in the event that something catastrophic were to happen." The former administrator of NASA, Michael Griffin, frames it as a matter of the "survival of the species." And the late astrophysicist Stephen Hawking conjectured that if humanity fails to colonize space within 100 years, we could face extinction.

To be sure, humanity will eventually need to escape Earth to survive, since the sun will make the planet uninhabitable in about 1 billion years. But for many "space expansionists," escaping Earth is about much more than dodging the bullet of extinction: it's about realizing astronomical amounts of value by exploiting the universe's vast resources to create something resembling utopia. For example, the astrobiologist Milan Cirkovic calculates that some 10^46 people per century could come into existence if we were to colonize our Local Supercluster, Virgo. This leads Nick Bostrom to argue that failing to colonize space would be tragic because it would mean that these potential "worthwhile lives" would never exist, and this would be morally bad.

But would these trillions of lives actually be worthwhile? Or would colonization of space lead to a dystopia?


In a recent article in Futures, which was inspired by political scientist Daniel Deudney’s forthcoming book Dark Skies, I decided to take a closer look at this question. My conclusion is that in a colonized universe the probability of the annihilation of the human race could actually rise rather than fall.

The argument is based on ideas from evolutionary biology and international relations theory, and it assumes that there aren’t any other technologically advanced lifeforms capable of colonizing the universe (as a recent study suggests is the case).

Consider what is likely to happen as humanity hops from Earth to Mars, and from Mars to relatively nearby, potentially habitable exoplanets like Epsilon Eridani b, Gliese 674 b, and Gliese 581 d. Each of these planets has its own unique environment that will drive Darwinian evolution, resulting in the emergence of novel species over time, just as species that migrate to a new island evolve traits different from those of their parent species. The same applies to the artificial environments of spacecraft like "O'Neill Cylinders," large cylindrical structures that rotate to produce artificial gravity. Insofar as future beings satisfy the basic conditions of evolution by natural selection (differential reproduction, heritability, and variation of traits across the population), evolutionary pressures will yield new forms of life.
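To make that divergence mechanism concrete, here is a toy simulation of my own construction (not from the article or the Futures paper): two populations with identical founders evolve under the three conditions just listed, each on a "world" whose environment favors a different trait optimum.

```python
import random

# Toy model of divergent evolution on two isolated "worlds." Each
# individual is a single numeric trait; each world's environment
# favors a different optimum value. All numbers are illustrative
# assumptions, not estimates from the article.

def evolve(optimum, generations=200, pop_size=100, seed=42):
    rng = random.Random(seed)
    pop = [0.0] * pop_size                       # identical founders
    for _ in range(generations):
        # Selection: individuals closest to this world's optimum survive
        pop.sort(key=lambda trait: abs(trait - optimum))
        survivors = pop[: pop_size // 2]         # differential reproduction
        # Heritability plus variation: each survivor leaves two offspring
        # that inherit its trait with a small random mutation
        pop = [parent + rng.gauss(0, 0.1) for parent in survivors for _ in range(2)]
    return sum(pop) / len(pop)

print("Mean trait on world A:", round(evolve(optimum=5.0), 2))   # drifts toward +5
print("Mean trait on world B:", round(evolve(optimum=-5.0), 2))  # drifts toward -5
```

The same founding population ends up with very different traits on the two worlds, which is the kind of speciation pressure the argument expects among space-hopping populations.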

But the process of "cyborgization"—that is, of using technology to modify and enhance our bodies and brains—is likely to shape the evolutionary trajectories of future populations living on exoplanets or in spacecraft even more strongly than natural selection. The result could be beings with completely novel cognitive architectures (or mental abilities), emotional repertoires, physical capabilities, lifespans, and so on.


In other words, as humanity spreads throughout the cosmos, natural selection and cyborgization will result in species diversification. At the same time, expanding across space will also result in ideological diversification. Space-hopping populations will create their own cultures, languages, governments, political institutions, religions, technologies, rituals, norms, and worldviews. As a result, different species will find it increasingly difficult over time to understand each other's motivations, intentions, behaviors, and decisions. It could even make communication between species with alien languages almost impossible. Furthermore, some species might begin to wonder whether the proverbial "Other" is conscious at all. This matters because if a species Y cannot consciously experience pain, then another species X might not feel morally obligated to care about Y. After all, we don't worry about kicking stones down the street because we don't believe that rocks can feel pain. Thus, as I write in the paper, phylogenetic and ideological diversification will engender a situation in which many species will be "not merely aliens to each other but, more significantly, alienated from each other."

But this yields some problems. First, extreme differences like those just listed will undercut trust between species. If you can't be sure that your neighbor isn't going to steal from, harm, or kill you, then you're going to be suspicious of her. And if you're suspicious of your neighbor, you might want an effective defense strategy to stop an attack, just in case one were to happen. But your neighbor might reason the same way: she's not entirely sure that you won't kill her, so she establishes a defense as well. The problem is that, since you don't fully trust her, you wonder whether her defense is actually part of an attack plan. So you start carrying a knife around with you, which she interprets as a threat, leading her to buy a gun, and so on. Within the field of international relations, this is called the "security dilemma," and it results in a spiral of militarization that can significantly increase the probability of conflict, even in cases where all actors have genuinely peaceful intentions.
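To see why mutual suspicion alone is enough to drive this spiral, here is a minimal game-theoretic sketch in Python. The payoff numbers are illustrative assumptions of mine, not from the paper; only their ordering matters, and that ordering makes arming each side's dominant strategy even though mutual disarmament is better for both.

```python
# The security dilemma as a two-player, one-shot game. Payoff values
# are illustrative assumptions; only their relative order matters.
ACTIONS = ("disarm", "arm")

# PAYOFFS[(my_action, neighbor_action)] = my payoff
PAYOFFS = {
    ("disarm", "disarm"): 3,  # peace, no arms spending
    ("disarm", "arm"):    0,  # exposed to attack
    ("arm",    "disarm"): 4,  # secure, slight advantage
    ("arm",    "arm"):    1,  # costly armed standoff
}

def best_response(neighbor_action):
    """Return whichever action maximizes my payoff, given the neighbor's."""
    return max(ACTIONS, key=lambda mine: PAYOFFS[(mine, neighbor_action)])

for theirs in ACTIONS:
    print(f"Neighbor plays {theirs!r} -> my best response: {best_response(theirs)!r}")
# Both lines print 'arm': arming is the best response no matter what the
# neighbor does, so both sides land in the worse (arm, arm) outcome.
```

Even with genuinely peaceful intentions, each actor's rational caution pushes both toward militarization, exactly the spiral described above.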

So, how can actors extricate themselves from the security dilemma if they can't fully trust each other? On the level of individuals, one solution has involved what Thomas Hobbes called the "Leviathan." The key idea is that people get together and say, "Look, since we can't fully trust each other, let's establish an independent governing system—a referee of sorts—that has a monopoly on the legitimate use of force. By replacing anarchy with hierarchy, we can also replace the constant threat of harm with law and order." Hobbes didn't believe that this happened historically, only that this predicament is what justifies the existence of the state. According to Steven Pinker, the Leviathan is a major reason that violence has declined in recent centuries.

The point is that if individuals—you and I—can overcome the constant threat of harm posed by our neighbors by establishing a governing system, then maybe future species could get together and create some sort of cosmic governing system that could similarly guarantee peace by replacing anarchy with hierarchy. Unfortunately, this looks unpromising within the "cosmopolitical" realm. One reason is that for states to maintain law and order among their citizens, their various appendages—e.g., law enforcement and the courts—need to be properly coordinated. If you call the police about a robbery and they don't show up for three weeks, then what's the point of living in that society? You'd be just as well off on your own. The question, then, is whether the appendages of a cosmic governing system could be sufficiently well coordinated to respond to conflicts and make top-down decisions about particular situations. To put it differently: If conflict were to break out in some region of the universe, could the relevant governing authorities respond soon enough for it to make a difference?


Probably not, because of the sheer vastness of space. For example, consider again Epsilon Eridani b, Gliese 674 b, and Gliese 581 d. These are, respectively, 10.5, 14.8, and 20.4 light-years from Earth. This means that a signal sent as of this writing, in 2018, wouldn't reach Gliese 581 d until 2038. A spaceship traveling at one-quarter the cosmic speed limit wouldn't arrive until around 2100, and a message simply affirming that it had arrived safely wouldn't return to Earth until around 2120. And Gliese 581 is relatively close as far as exoplanets go. Just consider that the Andromeda Galaxy is some 2.5 million light-years from Earth and the Triangulum Galaxy about 3 million light-years away. What's more, there are some 54 galaxies in our Local Group, which is about 10 million light-years wide, within a universe that stretches some 93 billion light-years across.
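As a quick sanity check on those numbers, here is a short Python calculation using the distances quoted above; the 0.25 figure corresponds to the "one-quarter the cosmic speed limit" scenario, and the years are approximate.

```python
# Light-travel and ship-travel timelines for the three exoplanets
# named in the text. Distances in light-years are from the article.

SIGNAL_YEAR = 2018      # year the signal/ship departs Earth
SHIP_SPEED_C = 0.25     # ship speed as a fraction of light speed

exoplanets_ly = {
    "Epsilon Eridani b": 10.5,
    "Gliese 674 b": 14.8,
    "Gliese 581 d": 20.4,
}

for name, dist_ly in exoplanets_ly.items():
    signal_arrives = SIGNAL_YEAR + dist_ly                # light-speed message
    ship_arrives = SIGNAL_YEAR + dist_ly / SHIP_SPEED_C   # 0.25c spacecraft
    reply_returns = ship_arrives + dist_ly                # confirmation sent home
    print(f"{name}: signal {signal_arrives:.0f}, "
          f"ship {ship_arrives:.0f}, reply heard {reply_returns:.0f}")

# Gliese 581 d: signal 2038, ship 2100, reply heard 2120 --
# and that is one of the nearer potentially habitable exoplanets.
```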

These facts make it look hopeless for a governing system to effectively coordinate law enforcement activities, judicial decisions, and so on, across cosmic distances. The universe is simply too big for a government to establish law and order in a top-down fashion.

But there is another strategy for achieving peace: Future civilizations could use a policy of deterrence to prevent other civilizations from launching first strikes. A policy of this sort, which must be credible to work, says: "I won't attack you first, but if you attack me first, I have the capabilities to destroy you in retaliation." This was the predicament of the United States and the Soviet Union during the Cold War, known as "mutually assured destruction" (MAD).

But could this work in the cosmopolitical realm of space? It seems unlikely. First, consider how many future species there could be: upwards of many billions. While some of these species would be too far away to pose a threat to each other (although see the qualification below), a huge number will nonetheless exist within one's galactic backyard. The point is that the sheer number of potential attackers would make it incredibly hard to determine, if one is attacked, who initiated the strike. And without a method for identifying instigators with high reliability, one's policy of deterrence won't be credible. And if one's policy of deterrence isn't credible, then one has no such policy at all!


Second, ponder the sorts of weapons that could become available to future spacefaring civilizations: redirected asteroids (a.k.a. "planetoid bombs"), "rods from God," sun guns, laser weapons, and no doubt an array of exceptionally powerful super-weapons that we can't currently imagine. It has even been speculated that the universe might exist in a "metastable" state and that a high-powered particle accelerator could tip it into a more stable state. This would create a bubble of total annihilation spreading in all directions at the speed of light, which opens up the possibility that a suicidal cult, for example, could weaponize a particle accelerator to destroy the universe.

The question, then, is whether defensive technologies could effectively neutralize such risks. There's a lot to say here, but for present purposes, just note that, historically speaking, defensive measures have very often lagged behind offensive measures, resulting in periods of heightened vulnerability. This is an important point because when it comes to existentially dangerous super-weapons, one only needs to be vulnerable for a short period to risk annihilation.

So far as I can tell, this seriously undercuts the credibility of policies of deterrence. Again, if species A cannot convince species B that a first strike by B would be answered with an effective and devastating counterstrike, then B may take a chance at attacking A. In fact, B does not even need to be malicious to do this: it only needs to worry that A might, at some point in the near- or long-term future, attack B, thus making it rational for B to launch a preemptive strike to eliminate the potential danger. Thinking about this predicament in the radically multipolar conditions of space, it seems fairly obvious that conflict will be extremely difficult to avoid.

The lesson of this argument is not to uncritically assume that venturing into the heavens will necessarily make us safer or more existentially secure. This is a point that organizations hoping to colonize Mars, such as SpaceX, NASA, and Mars One, should seriously contemplate. How can humanity migrate to another planet without bringing our problems with us? And how can different species that spread throughout the cosmos maintain peace when sufficient mutual trust is unattainable and advanced weaponry could destroy entire civilizations?


Human beings have made many catastrophically bad decisions in the past. Some of these outcomes could have been avoided if only the decision-makers had deliberated a bit more about what could go wrong—i.e., had done a “premortem” analysis. We are in that privileged position right now with respect to space colonization. Let’s not dive head-first into waters that turn out to be shallow.

Phil Torres is the director of the Project for Human Flourishing and the author of Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks.

WATCH: Should we be pessimistic about the deep future?
