A century ago, physics breakthroughs came in rapid sequence. There was quantum mechanics and Einstein’s theories of space and time, lots of new particles, two new nuclear forces, and eventually the standard model of particle physics. This progress and its technological applications commanded respect, if not outright fear.

But today, the foundations of physics are a sleepy place. We’re still chewing on the same problems that we had a century ago—and all that chewing hasn’t made them any more digestible. What is dark matter? What does quantum mechanics really mean? And why does gravity refuse to cooperate with quantum physics? These are problems that, when I can’t sleep, I like to think have already kept Einstein up at night.

Since then, many ideas have been put forward to solve each of these problems, but it is rare for a truly new one to see the light of day. This is why I was very excited to see the recent publications of Jonathan Oppenheim, a professor of quantum theory at University College London.

I have met Oppenheim a few times in the past because we share a similar intellectual history. Oppenheim and I both used to work on black holes, more specifically on the question of whether black holes truly destroy information. It seems that we both came to conclude the problem cannot be solved without first understanding how space, time, and quantum physics work together. But there, our ways parted. While I put the blame for the black hole information paradox on quantum physics, Oppenheim blamed gravity.

The idea is simple: Make gravity as random as quantum physics.

In all fairness, putting the blame on gravity makes more sense because gravity is the odd one out among the fundamental forces of nature. While electromagnetism (the union of the electric and magnetic force), the strong nuclear force (which keeps atomic nuclei together), and the weak nuclear force (responsible for nuclear decay) are all described by quantum processes, gravity is not.

Gravity is, as physics parlance has it, a “classical” or non-quantum theory, still described by Einstein’s theory of general relativity. It is a deterministic theory, which means that future events can be deduced from past events. Quantum mechanics breaks with this determinism: It brings in an inherent randomness, unpredictable quantum jumps that happen whenever you measure a particle.

According to quantum mechanics, this randomness is fundamental; it is not due to our lack of information—it’s just how nature is. As a consequence, with quantum mechanics we cannot make definite predictions, only probabilistic ones. We might be able to say, for example, that an atom decays with 50 percent probability within 10 days, but not exactly when it will decay.
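The decay example follows a one-line formula: If an atom decays with 50 percent probability within 10 days, the probability that it has decayed by time *t* is 1 − e^(−λt), with λ fixed by that 10-day half-life. A minimal sketch of the arithmetic (the 10-day half-life is the article’s example; the function name is my own):

```python
import math

half_life_days = 10.0                      # the article's example: 50% decay within 10 days
decay_rate = math.log(2) / half_life_days  # lambda in the exponential decay law

def decay_probability(t_days: float) -> float:
    """Probability that the atom has decayed by time t (exponential law)."""
    return 1.0 - math.exp(-decay_rate * t_days)

print(round(decay_probability(10), 2))  # 0.5  -- 50% within one half-life
print(round(decay_probability(20), 2))  # 0.75 -- but *when* it decays stays random
```

The formula pins down probabilities exactly, yet says nothing about the moment of any individual decay, which is precisely the randomness the article describes.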

Einstein did not like this at all. He thought that the randomness inherent in quantum mechanics just means that the theory is incomplete, that something is missing from it. A good theory, he believed, should be classical, like his own general relativity. As Einstein quipped, if his theory were wrong, he would feel sorry for the good Lord.

So far, we have had no need to be sorry for the Lord. Einstein’s masterwork has been tested countless times to utmost precision and has held up against every attempt to falsify it. Empirical successes notwithstanding, most physicists think it is wrong. The reason is that general relativity cannot describe some situations that we know occur in nature.

Take a simple example that could happen in a laboratory right at this moment. An electron is sent through a plate with two thin slits, a double-slit. It’s a particle with quantum properties, so the electron can go through both slits at the same time. This isn’t just a story; we know that this is necessary to properly describe our observations on the screen behind the slits.

We also know that electrons have masses and that masses generate a gravitational pull. But if the electron goes through a double-slit, where is the gravitational pull directed? Einstein’s general relativity cannot answer this question because it cannot deal with a particle that is in two places at the same time.

This is not the only situation with which Einstein’s math can’t cope. There is also the question of what happens inside black holes, which Oppenheim and I and many other physicists were trying to answer. A similar problem occurs with the Big Bang. Einstein’s theory just isn’t fit to deal with these cases. We need something better, a theory that combines quantum physics with gravity, usually called “quantum gravity.”

Physicists had been discussing this problem already in the 1930s, around the same time that dark matter was first discovered. For decades, they thought that Einstein’s gravity could be turned into a quantum theory, the same way physicists had done for the electromagnetic force.

Come the 1960s, it turned out that this didn’t work. Richard Feynman and Bryce DeWitt, among others, tried to give quantum properties to gravity using the already known mathematics. But the resulting theory (now known as “perturbatively quantized gravity”) did not work. When extrapolated to extreme situations like the Big Bang or the inside of black holes—exactly the places that we are most interested in!—it produced incurable infinities. Those led to predictions of probabilities larger than one, useless mathematical nonsense that didn’t help physicists understand what was really happening.

After that, there were many other attempts to turn gravity into a theory of quantum gravity: string theory, loop quantum gravity, asymptotically safe gravity, causal dynamical triangulations, and some more. They all have their pros and cons, but to make a long story short, their cons have prevented them all from being convincing (to anyone but the people who work on them).

If the past 50 years have taught us one thing, it’s that the problem of reconciling quantum physics with gravity is much more difficult than anyone thought it would be. After so much trial and failure, it certainly seems that we are missing something big.

Oppenheim’s new theory could be what physicists have been missing. His idea is, on the face of it, quite simple: Rather than trying to give quantum properties to gravity, just make gravity as random as quantum physics, so that the two fit together.

Loosely speaking, Oppenheim postulates that space and time, combined into spacetime, constantly make small random changes. According to this theory, spacetime constantly shifts imperceptibly around us. This shifting—not to be confused with the wiggling of gravitational waves—ties together with gravity because in Einstein’s theory, gravity is described by the curvature of spacetime itself.

In Oppenheim’s framework, then, the random changes of spacetime affect the motion of quantum particles, and those quantum particles in return affect the changes of spacetime. It is a two-way process neatly consistent with John Wheeler’s one-line summary of general relativity that “Spacetime tells matter how to move; matter tells spacetime how to curve.”

If the theory is right, Einstein is wrong. God does indeed play dice.

But combining gravity and quantum physics this way sounds easier than it is. The problem that Oppenheim faced is that there was no mathematics to make that junction of quantum and classical physics happen: Physicists had mathematics to deal with quantum systems, and mathematics to deal with non-quantum systems, but no mathematics to deal with a mixture of both. So, Oppenheim had to go and develop that mathematics himself.

I read some of his early work about five years ago and, to be honest, I wasn’t particularly excited. (Jonathan, if you are reading this, I was the referee who wrote that the idea is “not uninteresting” but “very speculative, immature, and vague.” And I stand by that, because really I felt that the first attempt created more problems than it solved.) But Oppenheim did not give up, and five years have made a big difference.

Reading Oppenheim’s new papers—published in the journals *Nature Communications* and *Physical Review X*—about what he dubs “Post-Quantum Gravity,” I have been impressed by how far he has pushed the approach. He has developed a full-blown framework that combines quantum physics with classical physics, and he tells me that he has another paper in preparation which shows that he can solve the problem of infinities that plague the Big Bang and black holes.

It is worthwhile to mention that Oppenheim’s theory has close mathematical relatives in the foundations of quantum mechanics.

The standard formalism of quantum mechanics brings in the random element suddenly, in the moment when a measurement in a quantum experiment happens. Prior to a measurement, a quantum system can have many possible outcomes, but once a measurement has been made, these possibilities “collapse” to one actuality.

Say you send a quantum of light, a photon, through a semi-transparent plate known as a beam-splitter. According to quantum mechanics, what happens is not that half of the photons go through and the other half are reflected, but all photons split into both possibilities. Yet, once you measure whether the photon went through or didn’t, you either detect it or you don’t. So the two possibilities have collapsed to one.
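In the standard formalism, the beam-splitter assigns the photon an *amplitude* for each path, and only measurement turns amplitudes into probabilities via the Born rule. A minimal sketch of that bookkeeping (the phase factor on reflection is one common convention, not something the article specifies):

```python
import math

# A 50/50 beam-splitter gives the photon equal *amplitudes* for the two
# paths; before measurement, the photon is in both.
amp_through = 1 / math.sqrt(2)
amp_reflected = 1j / math.sqrt(2)  # relative phase i on reflection (a common convention)

# Measurement collapses the superposition: the Born rule turns each
# amplitude into a probability by taking its squared magnitude.
p_through = abs(amp_through) ** 2
p_reflected = abs(amp_reflected) ** 2

print(p_through, p_reflected)  # each ~0.5; the two probabilities sum to 1
```

Before measurement, both amplitudes coexist; after measurement, only one of the two outcomes remains, with the probabilities shown.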

In quantum mechanics, this collapse is a discontinuous and also faster-than-light process, which is difficult to reconcile with Einstein’s idea that nothing travels faster than light. The reason the faster-than-light collapse does not cause outright problems with Einstein’s mathematics is that the collapse is not observable: The only thing you observe is the outcome of the measurement—that is, the aftermath of the collapse—not what happened before it. But its being unobservable also means that we cannot know that it is truly discontinuous. And that opens the possibility of replacing it with something better, something more compatible with Einstein’s ideas.

One approach has been to remedy the sudden measurement collapse by turning it into a gradual process, an idea pursued in what are collectively called “objective collapse models.” In these models, a particle does not make one big random jump to one “actuality” at the time of measurement, but many small adjustments that add up to what we call the collapse. Oppenheim’s approach mimics this idea, but also links it to gravity as the cause of those random jumps. It is gravity that mixes in the non-quantum element that takes over at the end of the measurement process.

The idea that gravity itself causes the measurement collapse was pioneered by Roger Penrose. Penrose’s approach works completely differently from Oppenheim’s, but the two are related in that it is ultimately gravity that is responsible for the apparent collapse of quantum possibilities.

Oppenheim’s post-quantum gravity changes both gravity and quantum physics. This is good news because it is extremely difficult to experimentally test changes to the law of gravity that concern quantum particles, because gravity is such an extremely weak force on small scales compared to the other fundamental forces. However, it is much easier to experimentally test deviations from quantum mechanics because those can be measured very precisely with new quantum technologies, and that opens a way to find out whether Oppenheim’s idea holds up.

Models in which gravity is not a quantum theory generally tend to increase the uncertainty inherent in quantum physics. This might sound counterintuitive: Quantum physics is known for its uncertainty, so one might think that leaving gravity a non-quantum theory would reduce this uncertainty rather than increase it. Alas, leaving gravity non-quantum means that it doesn’t fit the quantum properties which we know particles to have—like being in two places at the same time. Making the two fit together then amplifies the randomness of quantum physics. In practice, this can lead to an increase in jittering, or an unexpectedly large spread of measurement results.
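As a toy illustration (entirely my own construction, not Oppenheim’s actual model): if each measurement result carries the usual quantum noise plus an extra, hypothetical classical jitter from fluctuating spacetime, the spread of the results widens, which is the kind of signature experiments would look for:

```python
import random
import statistics

random.seed(0)  # reproducible toy simulation

def measurements(n: int, quantum_sigma: float, spacetime_sigma: float) -> list:
    """Toy model: each result = quantum noise + (hypothetical) extra
    jitter from random spacetime fluctuations. Independent noises add
    in quadrature, so any extra jitter widens the spread."""
    return [random.gauss(0, quantum_sigma) + random.gauss(0, spacetime_sigma)
            for _ in range(n)]

sigma_q = 1.0
plain = statistics.stdev(measurements(100_000, sigma_q, 0.0))   # quantum noise only
jitter = statistics.stdev(measurements(100_000, sigma_q, 0.5))  # plus spacetime jitter

print(plain < jitter)  # extra classical randomness widens the spread
```

The numbers (a unit quantum spread, a half-unit jitter) are arbitrary; the point is only that independent noise sources can never cancel, so a non-quantum gravity adds to, rather than subtracts from, the quantum uncertainty.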

This amplification of quantum uncertainty also happens in Oppenheim’s model. In this case, the origin is easy to pinpoint: The additional uncertainty comes from the postulated randomness of spacetime. This opens the possibility to experimentally test it, for example by precisely tracking the gravitational pull of objects and see if it unexpectedly fluctuates.

This can be done, for example, with a standard (Cavendish-type) experiment that measures the gravitational attraction between two objects by suspending them from wires and tracking how much the wires twist as the objects attract each other. The current experiments of this type rule out one variant of the post-quantum model, and if the sensitivity of such experiments can be further increased—which I am sure someone, somewhere is working on—then they could tell us whether the still-viable variant makes its appearance in unexpected fluctuations.
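To get a feeling for the numbers such an experiment contends with, Newton’s law F = Gm₁m₂/r² gives the average pull that the torsion wires must register; post-quantum gravity would add fluctuations around it. A quick sketch with illustrative masses (the specific values are mine, not taken from any actual experiment):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def newton_force(m1_kg: float, m2_kg: float, r_m: float) -> float:
    """Newtonian attraction between two masses -- the average signal a
    Cavendish-type torsion experiment must resolve."""
    return G * m1_kg * m2_kg / r_m ** 2

# Illustrative numbers: two 1-kg masses 10 cm apart.
force = newton_force(1.0, 1.0, 0.1)
print(f"{force:.2e} N")  # ~6.7e-09 N -- a few billionths of a newton
```

A force of a few nanonewtons is already at the edge of what torsion balances can track, which is why resolving *fluctuations* on top of it demands a further jump in sensitivity.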

I don’t want to withhold from you that I think Oppenheim’s theory is wrong, because it remains incompatible with Einstein’s cherished principle of locality, which says that causes should only travel from one place to its nearest neighbors, not jump over distances. I suspect that this is going to cause problems sooner or later, for example with energy conservation. Still, I might be wrong.

If Oppenheim’s right, it would mean Einstein was both right and wrong: right in that gravity remained a classical, non-quantum theory, and wrong in that God did play dice indeed. And I guess for the good Lord, we would have to be both sorry and not sorry.

*Lead image: Lia Koltyrina / Shutterstock*