Physicists have always hoped that once we understood the fundamental laws of physics, they would make unambiguous predictions for physical quantities. We imagined that the underlying physical laws would explain why the mass of the Higgs particle must be 125 gigaelectron-volts, as was recently discovered, and not any other value, and also make predictions for new particles that are yet to be discovered. For example, we would like to predict what kind of particles make up the dark matter.

These hopes now appear to have been hopelessly naïve. Our most promising fundamental theory, string theory, does not make unique predictions. It seems to contain a vast landscape of solutions, or “vacua,” each with its own values of the observable physical constants. The vacua are all physically realized within an enormous eternally inflating multiverse.

Has the theory lost its mooring to observation? If the multiverse is large and diverse enough to contain some regions where dark matter is made out of light particles and other regions where dark matter is made out of heavy particles, how could we possibly predict which one we should see in our own region? And indeed many people have criticized the multiverse concept on just these grounds. If a theory makes no predictions, it ceases to be physics.

But an important issue tends to go unnoticed in debates over the multiverse. Cosmology has *always* faced a problem of making predictions. The reason is that all our theories in physics are dynamical: The fundamental physical laws describe what will happen, given what already is. So, whenever we make a prediction in physics, we need to specify what the initial conditions are. How do we do that for the entire universe? What sets the *initial* initial conditions? This is science’s version of the old philosophical question of First Cause.

The multiverse offers an answer. It is not the enemy of prediction, but its friend.

The main idea is to make probabilistic predictions. By calculating what happens frequently and what happens rarely in the multiverse, we can make statistical predictions for what we will observe. This is not a new situation in physics. We understand an ordinary box of gas in the same way. Although we cannot possibly keep track of the motion of all the individual molecules, we can make extremely precise predictions for how the gas as a whole will behave. Our job is to develop a similar statistical understanding of events in the multiverse.

This understanding could take one of three forms. First, the multiverse, though very large, might be able to explore only a finite number of different states, just like an ordinary box of gas. In this case we know how to make predictions, because after a while the multiverse forgets about the unknown initial conditions. Second, perhaps the multiverse is able to explore an infinite number of different states, in which case it never forgets its initial conditions, and we cannot make predictions unless we know what those conditions are. Finally, the multiverse might explore an infinite number of different states, but the exponential expansion of space effectively erases the initial conditions.

In many ways, the first option is the most agreeable to physicists, because it extends our well-established statistical techniques. Unfortunately, the predictions we arrive at disagree violently with observations. The second option is very troubling, because our existing laws are incapable of providing the requisite initial conditions. It is the third possibility that holds the most promise for yielding sensible predictions.

But this program has encountered severe conceptual obstacles. At root, our problems arise because the multiverse is an infinite expanse of space and time. These infinities lead to paradoxes and puzzles wherever we turn. We will need a revolution in our understanding of physics in order to make sense of the multiverse.

The first option for making statistical predictions in cosmology goes back to a paper by the Austrian physicist Ludwig Boltzmann in 1895. Although it turns out to be wrong, in its failure we find the roots of our current predicament.

Boltzmann’s proposal was a bold extrapolation from his work on understanding gases. To specify completely the state of a gas would require specifying the exact position of every molecule. That is impossible. Instead, what we can measure—and would like to make predictions for—is the coarse-grained properties of the box of gas, such as the temperature and the pressure.

A key simplification allows us to do this. As the molecules bounce around, they will arrange and rearrange themselves in every possible way they can, thus exploring all their possible configurations, or “microstates.” This process will erase the memory of how the gas started out, allowing us to ignore the problem of initial conditions. Since we can’t keep track of where all the molecules are, and anyway their positions change with time, we assume that any microstate is equally likely.

This gives us a way to calculate how likely it is to find the box in a given coarse-grained state, or “macrostate”: We simply count the fraction of microstates consistent with what we know about the macrostate. So, for example, it is more likely that the gas is spread uniformly throughout the box rather than clumped in one corner, because only very special microstates have all of the gas molecules in one region of the box.
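This counting argument can be checked in a toy model. The sketch below is illustrative, not anything from Boltzmann: it treats each of N distinguishable molecules as independently occupying the left or right half of the box, so every left/right assignment is one equally likely microstate, and compares two macrostates by counting.

```python
from math import comb

N = 20                   # toy number of distinguishable molecules
total = 2 ** N           # microstates: each molecule is in the left or right half

# Macrostate "all molecules clumped in the left half": one microstate realizes it.
p_clumped = 1 / total

# Macrostate "even 10/10 split": comb(N, N//2) microstates realize it.
p_even = comb(N, N // 2) / total

print(f"{p_clumped:.2e}")  # ~9.5e-07
print(f"{p_even:.3f}")     # ~0.176
```

Even for just 20 molecules, the clumped macrostate is suppressed by a factor of about a million relative to the even split; for a realistic number of molecules the suppression is astronomically larger.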

For this procedure to work, the total number of microstates, while very large, must be finite. Otherwise the system will never be able to explore all its states. In a box of gas, this finitude is guaranteed by the uncertainty principle of quantum mechanics. Because the position of each molecule cannot be specified exactly, the gas has only a finite number of distinct configurations.

Gases that start off clumpy will spread out, for a simple reason: It is statistically far more likely for their molecules to be uniformly distributed than clustered. If the molecules begin in a fairly improbable configuration, they will naturally evolve to a more probable one as they bounce around randomly.

Yet our intuition about gases must be altered when we consider huge spans of time. If we leave the gas in the box for long enough, it will explore some unusual microstates. Eventually all of the particles will accidentally cluster in one corner of the box.

With this insight, Boltzmann launched into his cosmological speculations. Our universe is intricately structured, so it is analogous to a gas that clusters in one corner of a box—a state that is far from equilibrium. Cosmologists generally assume it must have begun that way, but Boltzmann pointed out that, over the vastness of the eons, even a chaotic universe will randomly fluctuate into a highly ordered state. Attributing the idea to his assistant, known to history only as “Dr. Schuetz,” Boltzmann wrote:

“It may be said that the world is so far from thermal equilibrium that we cannot imagine the improbability of such a state. But can we imagine, on the other side, how small a part of the whole universe this world is? Assuming the universe is great enough, the probability that such a small part of it as our world should be in its present state, is no longer small.

“If this assumption were correct, our world would return more and more to thermal equilibrium; but because the whole universe is so great, it might be probable that at some future time some other world might deviate as far from thermal equilibrium as our world does at present.”

It is a compelling idea. What a shame that it is wrong.

The trouble was first pointed out by the astronomer and physicist Sir Arthur Eddington in 1931, if not earlier. It has to do with what are now called “Boltzmann brains.” Suppose the universe is like a box of gas and, most of the time, is in thermal equilibrium—just a uniform, undifferentiated gruel. Complex structures, including life, arise only when there are weird fluctuations. At these moments, gas assembles into stars, our solar system, and all the rest. There is no step-by-step process that sculpts it. It is like a swirling cloud that, all of a sudden, just so happens to take the shape of a person.

The problem is a quantitative one. A small fluctuation that makes an ordered structure in a small part of space is far, far more likely than a large fluctuation that forms ordered structures over a huge region of space. In Boltzmann and Schuetz’s theory, a fluctuation that produces our solar system alone is therefore far, far more likely than one that also bothers to make all of the other stars in the universe. The theory thus conflicts with observation: It predicts that typical observers should see a completely blank sky, without stars, when they look up at night.

Taking this argument to an extreme, the most common type of observer in this theory is one that requires the minimal fluctuation away from equilibrium. We imagine this as an isolated brain that survives just long enough to notice it is about to die: the so-called Boltzmann brain.

If you take this type of theory seriously, it predicts that we are just some very special Boltzmann brains who have been deluded into thinking that we are observing a vast, homogeneous universe. At the next instant our delusions are extremely likely to be shattered, and we will discover that there are no other stars in the universe. If our state of delusion lasts long enough for this article to appear, you can safely discard the theory.

What are we to conclude? Evidently, the whole universe is not like a box of gas after all. A crucial assumption in Boltzmann’s argument is that there are only a finite (if very large) number of molecular configurations. This assumption must be incorrect. Otherwise, we would be Boltzmann brains.

So, we must seek a new approach to making predictions in cosmology. The second option on our list is that the universe has an infinite number of states available to it. Then the tools that Boltzmann developed are no longer useful in calculating the probability of different things happening.

But then we’re back to the problem of initial conditions. Unlike a finite box of gas, which forgets about its initial conditions as the molecules scramble themselves, a system with an infinite number of available states cannot forget its initial conditions, because it takes an infinite time to explore all of its available states. To make predictions, we would need a theory of initial conditions. Right now, we don’t have one. Whereas our present theories take the prior state of the universe as an input, a theory of initial conditions would have to give this state as an output. It would thus require a profound shift in the way physicists think.

The multiverse offers a third way—that is part of its appeal. It allows us to make cosmological predictions in a statistical way within the current theoretical framework of physics. In the multiverse, the volume of space grows indefinitely, all the while producing expanding bubbles with a variety of states inside. Crucially, the predictions do not depend on the initial conditions. The expansion approaches a steady-state behavior, with the high-energy state continually growing and budding off lower-energy regions. The overall volume of space is growing, and the number of bubbles of every type is growing, but the ratios between them (and hence the probabilities) remain fixed.

The basic idea of how to make predictions in such a theory is simple. We count how many observers in the multiverse measure a physical quantity to have a given value. The probability of our observing a given outcome equals the proportion of observers in the multiverse who observe that outcome.

For instance, if 10 percent of observers live in regions of the multiverse where dark matter is made out of light particles (such as axions), while 90 percent of observers live in regions where dark matter is made out of heavy particles (which, counterintuitively, are called WIMPs), then we have a 10 percent chance of discovering that dark matter is made of light particles.

The very best reason to believe this type of argument is that Steven Weinberg of the University of Texas at Austin used it to successfully predict the value of the cosmological constant a decade before it was observed. The combination of a theoretically convincing motivation with Weinberg’s remarkable success made the multiverse idea attractive enough that a number of researchers, including me, have spent years trying to work it out in detail.

The major problem we faced is that, since the volume of space grows without bound, the number of observers observing any given thing is infinite, making it difficult to characterize which events are more or less likely to occur. This amounts to an ambiguity in how to characterize the steady-state behavior, known as the measure problem.

Roughly, the procedure to make predictions goes as follows. We imagine that the universe evolves for a large but finite amount of time and count all of the observations. Then we calculate what happens when the time becomes arbitrarily large. That should tell us the steady-state behavior. The trouble is that there is no unique way to do this, because there is no universal way to define a moment in time. Observers in distant parts of spacetime are too far apart and accelerating away from each other too fast to be able to send signals to each other, so they cannot synchronize their clocks. Mathematically, we can choose many different conceivable ways to synchronize clocks across these large regions of space, and these different choices lead to different predictions for what types of observations are likely or unlikely.

One prescription for synchronizing clocks tells us that most of the volume will be taken up by the state that expands the fastest. Another tells us that most of the volume will be taken up by the state that decays the slowest. Worse, many of these prescriptions predict that the vast majority of observers are Boltzmann brains. A problem we thought we had eliminated came rushing back in.

When Don Page at the University of Alberta pointed out the potential problems with Boltzmann brains in a paper in 2006, Raphael Bousso at U.C. Berkeley and I were thrilled to realize that we could turn the problem on its head. We found we could use Boltzmann brains as a tool—a way to decide among differing prescriptions for how to synchronize clocks. Any proposal that predicts that we are Boltzmann brains must perforce be wrong. We were so excited (and worried that someone else would have the same idea) that we wrote our paper in just two days after Page’s paper appeared. Over the course of several years, persistent work by a relatively small group of researchers succeeded in using these types of tests to eliminate many proposals and to form something of a consensus in the field on a nearly unique solution to the measure problem. We felt that we had learned how to tame the frightening infinities of the theory.

Just when things were looking good, we encountered a conceptual problem that I see no escape from within our current understanding: the end-of-time problem. Put simply, the theory predicts that the universe is on the verge of self-destruction.

The issue came into focus via a thought experiment suggested by Alan Guth of the Massachusetts Institute of Technology and Vitaly Vanchurin at the University of Minnesota Duluth. This experiment is unusual even by the standards of theoretical physics. Suppose that you flip a coin and do not see the result. Then you are put into a cryogenic freezer. If the coin came up heads, the experimenters wake you up after one year. If the coin came up tails, the experimenters instruct their descendants to wake you up after 50 billion years. Now suppose you have just woken up and have a chance to bet whether you have been asleep for 1 year or 50 billion years. Common sense tells us that the odds for such a bet should be 50/50 if the coin is fair.

But when we apply our rules for how to do calculations in an eternally expanding universe, we find that you should bet that you slept for only one year. This strange effect occurs because the volume of space is exponentially expanding and never stops. So the number of sleeper experiments beginning at any given time is always increasing. A lot more experiments started a year ago than 50 billion years ago, so most of the people waking up today were asleep for a short time.

The scenario may sound extreme, even silly. But that’s just because the conditions we are dealing with in cosmology are extreme, involving spans of times and volumes of space that are outside human experience. You can understand the problem by thinking about a simpler scenario that is mathematically identical. Suppose that the population of Earth doubles every 30 years—forever. From time to time, people perform these sleeper experiments, except now the subjects sleep either for 1 year or for 100 years. Suppose that every day 1 percent of the population takes part.

Now suppose you are just waking up in your cryogenic freezer and are asked to bet how long you were asleep. On the one hand, you might argue that obviously the odds are 50/50. On the other, on any given day, far more people wake up from short naps than from long naps. For example, in the year 2016, sleepers who went to sleep for a short time in 2015 will wake up, as will sleepers who began a long nap in 1916. But since far more people started the experiment in 2015 than in 1916 (always 1 percent of the population), the vast majority of people who wake up in 2016 slept for a short time. So it might be natural to guess that you are waking from a short nap.
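The second line of argument is just arithmetic on the exponential growth, and can be checked directly. A minimal sketch, using the doubling time, nap lengths, start years, and 1-percent participation rate from the scenario above:

```python
DOUBLING_YEARS = 30

def relative_population(years_elapsed):
    """Population relative to year zero, doubling every 30 years."""
    return 2 ** (years_elapsed / DOUBLING_YEARS)

# Wakers in 2016: short nappers began in 2015, long nappers in 1916.
# Each year, 1 percent of the population starts an experiment, and a
# fair coin splits them evenly between 1-year and 100-year naps.
short = 0.5 * 0.01 * relative_population(2015 - 1916)  # started in 2015
long_ = 0.5 * 0.01 * relative_population(0)            # started in 1916

frac_short = short / (short + long_)
print(round(frac_short, 2))  # ~0.91: most of today's wakers napped briefly
```

The 99 years of growth between 1916 and 2015 amount to a factor of 2^(99/30) ≈ 10, so roughly ten short-nappers wake up in 2016 for every long-napper.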

The fact that two logical lines of argument yield contradictory answers tells us that the problem is not well-defined. It just isn’t a sensible problem to calculate probabilities under the assumption that the human population grows exponentially forever, and indeed it is impossible for the population to grow forever. What is needed in this case is some additional information about how the exponential growth stops.

Consider two options. In the first, one day no more babies are born, but every sleeper experiment that has begun eventually finishes. In the second, a huge meteor suddenly destroys the planet, terminating all sleeper experiments. You will find that in option one, half of all observers who ever wake up do so from short naps, while in option two, most observers who ever wake up do so from short naps. It’s dangerous to take a long nap in the second option, because you might be killed by a meteor while sleeping. Therefore, when you wake up, it’s reasonable to bet that you most likely took a short nap. Once the theory becomes well-defined by making the total number of people finite, probability questions have unique, sensible answers.
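A toy calculation makes the contrast between the two options concrete. In the sketch below, the growth rate, nap lengths, and the cutoff year are illustrative choices, not anything from the underlying physics:

```python
SHORT, LONG = 1, 100   # nap lengths in years
END = 1000             # year the exponential growth stops

def frac_short_nappers(meteor: bool) -> float:
    """Fraction of all wakers who took the short nap.

    meteor=False: births stop at END, but every experiment finishes.
    meteor=True:  the planet is destroyed at END; anyone still asleep
                  at that point never wakes up.
    """
    short = long_ = 0.0
    for start in range(END):
        n = 2 ** (start / 30)  # experiments started this year, proportional to population
        if not meteor or start + SHORT <= END:
            short += 0.5 * n   # heads: wakes at start + 1
        if not meteor or start + LONG <= END:
            long_ += 0.5 * n   # tails: wakes at start + 100
    return short / (short + long_)

print(frac_short_nappers(meteor=False))          # 0.5: fair odds
print(round(frac_short_nappers(meteor=True), 2)) # ~0.91: long naps are risky
```

With the meteor, the long naps started in the final century never finish, and because the population grows exponentially those late starters dominate the count; hence most wakers took the short nap.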

In eternal expansion, more sleepers wake up from short naps. Bousso, Stefan Leichenauer at Berkeley, Vladimir Rosenhaus at the Kavli Institute for Theoretical Physics, and I pointed out that these strange results have a simple physical interpretation: The reason that more sleepers wake up from short naps is that living in an eternally expanding universe is dangerous, because one can run into the end of time. Once we realized this, it became clear that this end-of-time effect was an inherent characteristic of the recipe we were using to calculate probabilities, and it is there whether or not anyone actually decides to undertake these strange sleeper experiments. In fact, given the parameters that define our universe, we calculated that there is about a 50 percent probability of encountering the end of time in the next 5 billion years.

To be clear about the conclusion: No one thinks that time suddenly ends in spacetimes like ours, let alone that we should be conducting peculiar hibernation experiments. Instead, the point is that our recipe for calculating probabilities accidentally injected a novel type of catastrophe into the theory. This problem indicates that we are missing major pieces in our understanding of physics over large distances and long times.

To put it all together: Theoretical and observational evidence suggests that we are living in an enormous, eternally expanding multiverse where the constants of nature vary from place to place. In this context, we can only make statistical predictions.

If the universe, like a box of gas, can exist in only a finite number of available states, theory predicts that we are Boltzmann brains, which conflicts with observations, not to mention common sense. If, on the contrary, the universe has an infinite number of available states, then our usual statistical techniques are not predictive, and we are stuck. The multiverse appears to offer a middle way. The universe has an infinite number of states available, avoiding the Boltzmann brain problem, yet approaches a steady-state behavior, allowing for a straightforward statistical analysis. But then we still find ourselves making absurd predictions. In order to make any of these three options work, I think we will need a revolutionary advance in our understanding of physics.

*Ben Freivogel is an assistant professor at the University of Amsterdam. He works on fundamental questions in gravity and cosmology. He was a Ph.D. student of Leonard Susskind at Stanford University and a postdoc under the guidance of Raphael Bousso at U.C. Berkeley and then Alan Guth at MIT.*

*Lead image: Hubble Space Telescope image of the Egg Nebula. Credit: NASA, W. Sparks (STScI) and R. Sahai (JPL).*

*This article was originally published on* Nautilus Cosmos *in January 2017.*