Quantum mechanics isn’t rocket science. But it’s well on its way to taking rocket science’s place as the go-to metaphor for unintelligible math. Quantum mechanics, you have certainly heard, is infamously difficult to understand. It defies intuition. It makes no sense. Popular science accounts inevitably refer to it as “strange,” “weird,” “mind-boggling,” or all of the above.

We beg to differ. Quantum mechanics is perfectly comprehensible. It’s just that physicists abandoned the only way to make sense of it half a century ago. Fast forward to today and progress in the foundations of physics has all but stalled. The big questions that were open then are still open today. We still don’t know what dark matter is, we still have not resolved the disagreement between Einstein’s theory of gravity and the standard model of particle physics, and we still do not understand how measurements work in quantum mechanics.

How can we overcome this crisis? We think it’s about time to revisit a long-forgotten solution, Superdeterminism, the idea that no two places in the universe are truly independent of each other. This solution gives us a physical understanding of quantum measurements, and promises to improve quantum theory. Revising quantum theory would be a game changer for physicists’ efforts to solve the other problems in their discipline and to find novel applications of quantum technology.

Quantum Mechanics Is Everywhere

Until now, physicists and philosophers alike have taken it for granted that it’s not quantum mechanics which has shortcomings, but our understanding of it. For this reason, their efforts to make sense of it have focused on reinterpreting its mathematics, hoping that things will eventually click into place. This has not happened and it’s not going to happen. That’s because the problem with quantum mechanics is not one of interpretation. The problem is that all existing interpretations of quantum mechanics have internal contradictions, and those can only be resolved by a better theory. Quantum mechanics cannot be how nature works on the most fundamental level; we have to move beyond it.

Trouble is, no one knows why quantum effects disappear when one tries to measure them.

In all fairness, complaining about the shortcomings of quantum mechanics and calling for its replacement is quite an insult to hurl at a theory so dramatically successful and accurate. Credit where credit is due, so let us emphasize that—weird or not—quantum mechanics, at now more than 100 years of age, has done the most amazing job, and has more than earned its fellowship of devoted physicists.

Without quantum mechanics, we would not have lasers, would not have semiconductors and transistors, would not have computers, digital cameras, or touch-screens. We would not have magnetic spin resonance, electron tunneling microscopes, or atomic clocks. Nor would we have any of the countless applications based on all these technologies. We’d have no Wi-Fi, no artificial intelligence, no LEDs, and modern medicine basically wouldn’t exist because most imaging tools and analysis methods now rely on quantum mechanics. Last but not least, no one would ever have heard of quantum computers.

There is no doubt, therefore, that quantum mechanics is enormously relevant for society. But for the same reasons, there is no doubt that much could be gained from better understanding it.

Nobody Understands Quantum Mechanics

So why have even prominent physicists repeatedly stated that quantum mechanics cannot be understood?

The central ingredients of quantum mechanics are mathematical objects called wave functions. Wave functions describe elementary particles, and since everything is made of elementary particles, wave functions describe everything. So, there’s a wave function for electrons, a wave function for atoms, a wave function for cats, and so on. Strictly speaking, everything has quantum behavior; it’s just that in daily life most quantum behavior is not observable.

Trouble is, no one knows why quantum effects disappear when one tries to measure them. This “measurement problem” has bugged physicists ever since they thought up quantum mechanics. Parts of the puzzle have meanwhile been solved, but this partial solution is still unsatisfactory.

HIDDEN VARIABLE: The outcome of a dice-throw is unpredictable because it’s sensitive to details, like the motion of your hand. Since this is information we don’t have, throwing dice is random for all practical purposes. This is how to make sense of quantum mechanics: if we had the missing information, the outcome of a quantum measurement would follow. serpeblu / Shutterstock

To see the problem, consider you have a single particle and two detectors, one left, one right. If you send the particle left, the left detector clicks. If you send the particle right, the right detector clicks. So far, so unsurprising. But in quantum mechanics you can do more than that: You can have a particle that is in two states at the same time. You can, for example, send it through a beam-splitter so that afterward it is going both left and right. Physicists say the particle is in a “superposition” of left and right.

But you never observe a particle in a superposition of measurement outcomes. For such a superposition, the wave function of the particle will not let you predict what you measure; you can only predict the probability of what you measure. Let’s say it predicts 50 percent left and 50 percent right. Such a prediction makes sense for a collection of particles, or for a sequence of repeated measurements, but makes no sense for a single particle. The detector either clicks or it doesn’t. There just isn’t such a thing as a 50 percent measurement.

Mathematically, “clicks or doesn’t click” requires that we update the wave function at the moment of measurement, so that after measurement the particle is 100 percent in the detector in which it was indeed measured.
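
To make this bookkeeping concrete, here is a minimal sketch in Python (our own illustration; the two-component vector, the random seed, and the 10,000 runs are assumptions made for the example, not part of any standard presentation). The Born rule turns the amplitudes of the superposition into click probabilities, and the update replaces the wave function with the state of whichever detector clicked.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

psi = np.array([1.0, 1.0]) / np.sqrt(2)   # superposition: going left and right
probabilities = np.abs(psi) ** 2          # Born rule: 50 percent left, 50 percent right

# Each single run gives one definite click; quantum mechanics only predicts
# the frequencies over many repetitions.
clicks = rng.choice(["left", "right"], size=10_000, p=probabilities)
print({side: float(np.mean(clicks == side)) for side in ("left", "right")})

# The update: after a run, the wave function is set to 100 percent in the
# detector that actually clicked.
psi_after = np.array([1.0, 0.0]) if clicks[0] == "left" else np.array([0.0, 1.0])
```

Nothing in this recipe says what physically singles out one click over the other; the update is put in by hand, which is exactly the problem described next.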

Quantum mechanics cannot be how nature works on the most fundamental level; we have to move beyond it.

This update (aka the “collapse” of the wave function) is instantaneous; it happens at the same time everywhere. It may appear as if this was in conflict with Einstein’s speed-of-light limit. However, an observer cannot exploit this to send information faster than light because the observer has no control over what the measurement outcome is.

Indeed, the simultaneity of the measurement update is not the main problem. The main problem is that if quantum mechanics was—as most physicists believe—a theory for elementary particles, then the measurement update should be superfluous. The detector, after all, is also made of elementary particles, so we should be able to calculate what happens in a measurement.

Unfortunately, it is not only that we do not know how to calculate what the detector does when hit by a particle unless we just postulate the update. It is far worse: We know it is not possible.

We know it is not possible to correctly describe a quantum measurement without the update of the wave function, because the measurement process is more complicated than the behavior of the wave function when we do not observe it. The measurement process has the main purpose of removing superpositions of measurable outcomes. An unmeasured wave function, to the contrary, maintains superpositions, which is simply not what we observe. We have never encountered a detector that both clicks and doesn’t click.

Formally this means that while quantum mechanics is linear (it maintains superpositions), the measurement process is “non-linear”; it belongs to a class of theories more complicated than quantum mechanics. This is an important clue to improving quantum mechanics, but it has gone almost entirely unnoticed.
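
The linearity argument can be checked with a few lines of arithmetic. In the sketch below (again our own illustration; the rotation matrix is just a stand-in for some unitary, Schrödinger-type evolution), evolving a superposition gives exactly the superposition of the evolved parts, so no such evolution can ever turn “left plus right” into a single definite click.

```python
import numpy as np

L = np.array([1.0, 0.0])            # particle going left
R = np.array([0.0, 1.0])            # particle going right
psi = (L + R) / np.sqrt(2)          # superposition of left and right

theta = 0.3                         # arbitrary angle; any unitary works
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Linearity: evolving the superposition equals superposing the evolved parts.
print(np.allclose(U @ psi, (U @ L + U @ R) / np.sqrt(2)))   # True

# A measurement update leaves |L> and |R> alone but must send psi to one
# definite outcome. A linear map that leaves |L> and |R> alone would have to
# send psi to (|L> + |R>)/sqrt(2) -- still a superposition, never one click.
linear_prediction = (L + R) / np.sqrt(2)
print(np.allclose(linear_prediction, L))                    # False
```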

Instead, physicists have swept the conundrum of quantum measurement under the rug by denying that the wave function even describes single particles. The most widely accepted interpretation of quantum theory has it that the wave function describes not the particle itself, but an observer’s knowledge of what the particle does. This knowledge, reasonably enough, should be updated when we make a measurement. What this knowledge is about, you are not supposed to ask.

This interpretation, however, does not remove the problem that if quantum mechanics was fundamental, then we should be able to calculate what happens during measurement. Talking about “knowledge” held by an “observer” also refers to macroscopic objects whose behavior should be derivable, at least in principle, from the behavior of elementary particles. And again, we already know that this is not possible because the measurement process is not linear. One cannot resolve an inconsistency by reinterpreting the math, only by correcting the math.

The Obvious Solution

There are only two ways out of this conundrum. One is to reject reductionism and accept that some things that large objects do cannot be derived from what their constituents do, not even in principle.

Rejecting reductionism is popular among philosophers, but exceedingly unpopular among scientists, and for good reasons. Reductionism has been remarkably successful and is empirically well-established. Even more importantly, no one has ever proposed a consistent, non-reductionist theory of nature (unless you count verbose pamphlets that are not even wrong). And abandoning reductionism without proposing a better explanation is not only useless, it is outright anti-scientific. It does not help us make progress.

The other logical solution is that quantum mechanics is just not a fundamental theory, and its problems are a glimpse of a deeper layer of reality.

If quantum mechanics is not a fundamental theory, then the reason we cannot predict outcomes of quantum measurements is simply that we lack information. Quantum randomness, then, is no different from the randomness in, say, throwing dice.

Universal relatedness, this idea’s defining feature, does not reveal itself on the level of elementary particles.

The outcome of a dice-throw is predictable in principle. It is unpredictable in practice because it is very sensitive to even the most minuscule details, like the exact motion of your hand, imperfections in the shape of the die, or the roughness of the surface on which it rolls. Since this is information we do not have (and even if we had it, we wouldn’t be able to calculate with it), throwing dice is random for all practical purposes. The best prediction we can make is to say that when we average over the unknown, exact details, the probability of any face coming up is 1 in 6.

This is how to make sense of quantum mechanics. Measurement results might be predictable in principle. It is just that we are missing information. The wave function, then, would not itself be a description of a single particle. It would be an average over all the details we are missing. This would explain why quantum mechanics makes only probabilistic predictions. And while the underlying, new theory must reproduce the predictions of quantum mechanics in cases we have tested already, if we had this theory we could also tell in which cases we should see deviations from quantum mechanics.

This idea is supported by the fact that the empirically confirmed equation which determines the behavior of the wave function is almost identical to the equation physicists use to describe the behavior of collections of particles, not single particles.

Historically, this way of making sense of quantum mechanics has been called a “hidden variables theory.” “Hidden variables,” here, is a collective term for all unknown information from which, if we had it, the outcome of a quantum measurement would follow.
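
As a cartoon of the averaging idea, consider the toy sketch below (our own, deliberately naive example meant only to show how determinism underneath can look random on top; as the next paragraph explains, viable hidden variables are more subtle than a number carried by the particle). Each particle comes with an unknown value that fully determines which detector clicks, and averaging over the values we don’t know reproduces the 50/50 statistics that the wave function predicts.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def outcome(lam):
    """Deterministic rule: the hidden value fixes which detector clicks."""
    return "left" if lam < 0.5 else "right"

hidden = rng.uniform(0.0, 1.0, size=10_000)     # the information we don't have
clicks = np.array([outcome(lam) for lam in hidden])
print(float(np.mean(clicks == "left")))         # ~0.5, like the wave-function average
```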

These hidden variables, it must be emphasized, are not necessarily properties of the particles themselves. Indeed, the idea that they are is strongly disfavored by experiment already. Viable hidden variables encode missing information in the global configuration of the system. Therefore, while a theory of hidden variables is reductionist in the sense that quantum mechanics can be derived from it, the new physics does not reside on distances so short they must be tested with huge particle accelerators.

How Physics Took the Wrong Path

Let us emphasize that theories with hidden variables are not interpretations of quantum mechanics. They are different theories which describe nature more accurately and can indeed solve, rather than talk away, the measurement problem.

Needless to say, we are not the first to point out that quantum mechanics walks and talks like a theory for averages. This is probably what springs to everybody’s mind when confronted with random measurement outcomes. And hidden variables have been considered by physicists since the early days of quantum mechanics. But then they erroneously concluded the option is not viable, an error that persists today.

The mistake physicists made decades ago was to draw the wrong conclusion from a mathematical theorem proved by John Bell in 1964. This theorem shows that in any theory in which hidden variables let us predict measurement outcomes, the correlations between measurement outcomes obey a bound. Since then, countless experiments have shown that this bound can be violated. It follows that the type of hidden variables theory to which Bell’s Theorem applies is falsified. The conclusion that physicists drew is that quantum theory is correct and hidden variables are not.
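
The numbers behind this are easy to reproduce. The sketch below (our illustration) uses the textbook quantum prediction for the correlations of two spins prepared in a singlet state, together with the standard choice of detector angles. The CHSH combination of correlations, which is bounded by 2 for the hidden-variable theories Bell’s argument covers, comes out at about 2.83 in quantum mechanics, and experiments agree with the quantum value.

```python
import numpy as np

def E(a, b):
    """Quantum correlation of two spin measurements on a singlet pair."""
    return -np.cos(a - b)

# Standard CHSH settings (in radians) that maximize the quantum violation.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, -np.pi / 4

S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2), above the hidden-variable bound of 2
```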

Free will, while it tends to ignite furious debate, is entirely tangential to understanding quantum physics.

But Bell’s Theorem makes an assumption which is itself unsupported by evidence: That the hidden variables (whatever they are) are independent of the settings of the detector. This assumption—called “statistical independence”—is reasonable as long as an experiment only involves large objects like pills, mice, or cancer cells. (And indeed, in this case a violation of statistical independence would strongly suggest the experiment had been tampered with.) Whether it holds for quantum particles, however, no one knows. Because of this we can equally well conclude that the experiments which test Bell’s Theorem, rather than supporting quantum theory, have proved that statistical independence is violated.

Hidden variables theories that violate statistical independence give Superdeterminism its name. Shockingly enough, they have never been ruled out. They have never even been experimentally tested because that would require a different type of experiment than what physicists have done so far. To test Superdeterminism, one would have to look for evidence that quantum physics is not as random as we think it is.

The core idea of Superdeterminism is that everything in the universe is related to everything else because the laws of nature prohibit certain configurations of particles (or make them so unlikely that for all practical purposes they never occur). If you had an empty universe and placed one particle in it, then you could not place the other ones arbitrarily. They’d have to obey certain relations to the first.

This universal relatedness means in particular that if you want to measure the properties of a quantum particle, then this particle was never independent of the measurement apparatus. This is not because there is any interaction happening between the apparatus and the particle. The dependence between the two is simply a property of nature that, however, goes unnoticed if one deals only with large devices. If this is so, quantum measurements have definite outcomes—hence solving the measurement problem—while still giving rise to violations of Bell’s bound. Suddenly it all makes sense!

It is difficult to explain why physicists spent half a century with an inconsistent theory but never seriously considered that statistical independence may just be violated. We suspect part of the reason is that the rather technical assumption of statistical independence has become metaphorically related to the free will of the experimenter. Humans are cognitively biased to believe in free will, and this bias likely contributed to physicists collectively turning a blind eye to a promising explanation.

The issue of free will has become tied to Superdeterminism because it seems that if statistical independence is violated, the experimenter is not free to choose both the settings of their apparatus and the preparation of the particle to be measured. But free will, while it tends to ignite furious debate, is entirely tangential to understanding quantum physics. Detector settings can be chosen by machines. And violating statistical independence does not mean that the experimenter is somehow prevented from choosing a setting they like. It merely means that their setting is part of the information that determines the measurement outcome.

The real issue is that there has been little careful analysis of what exactly the consequences would be if statistical independence was subtly violated in quantum experiments. As we saw above, any theory that solves the measurement problem must be non-linear, and therefore most likely will give rise to chaotic dynamics. The possibility that small changes have large consequences is one of the hallmarks of chaos, and yet it has been thoroughly neglected in the debate about hidden variables.
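
The “small changes, large consequences” point is easy to demonstrate with any chaotic system; the logistic map in the sketch below is our illustrative stand-in, not a model of any quantum experiment.

```python
# Two trajectories of the fully chaotic logistic map, starting a millionth apart.
x, y = 0.400000, 0.400001

for _ in range(60):
    x = 4.0 * x * (1.0 - x)
    y = 4.0 * y * (1.0 - y)

print(abs(x - y))   # typically of order one: the tiny initial difference has blown up
```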

Low Risk, High Pay-Off

Given the technological relevance of quantum mechanics, moving beyond it would be a major scientific breakthrough. But because of the historical legacy, researchers who have worked on or presently work on Superdeterminism have been either ignored or ridiculed. As a consequence, the idea has remained sorely underdeveloped.

Due to the dearth of research, we have to date no generally applicable theory of Superdeterminism. We do have some models that provide a basis for understanding the violation of the Bell inequality, but no formalism remotely as flexible as the existing theory of quantum mechanics. While Superdeterminism makes some predictions that are largely model-independent, such as that measurement outcomes should be less randomly distributed than quantum mechanics predicts, it is easy to criticize such predictions because they are not based on a full-blown theory. Experimentalists do not even want to test the idea because they do not take it seriously. But we are unlikely to find evidence of Superdeterminism by chance. Universal relatedness, which is this idea’s defining feature, does not reveal itself on the level of elementary particles. Therefore, we do not believe that probing smaller and smaller distances with bigger and bigger particle accelerators will help solve the still-open fundamental questions.

It does not help that most physicists today have been falsely taught the measurement problem has been solved, or erroneously think that hidden variables have been ruled out. If anything is mind-boggling about quantum mechanics, it’s that physicists have almost entirely ignored the most obvious way to solve its problems.

Sabine Hossenfelder is a physicist at the Frankfurt Institute for Advanced Studies, Germany. Tim Palmer is a Royal Society Research Professor in the Department of Physics, at the University of Oxford, UK.

Lead image: agsandrew / Shutterstock

Additional Reading

Palmer, T.N. Discretisation of the Bloch sphere, fractal invariant sets and Bell’s Theorem. arXiv:1804.01734 (2020).

Hossenfelder, S. & Palmer, T.N. Rethinking superdeterminism. arXiv:1912.06462 (2019).
