Facts So Romantic

The End of Reductionism Could Be Nigh. Or Not.

Quantum mechanics seems to have a problem with the order of time, which might signal the need for an entirely new type of law.

Illustration by Ekaterina Kulaeva / Shutterstock

The history of science so far has been a triumph of reductionism. Biology can be reduced to chemistry, chemistry can be reduced to atomic physics, and atoms are made of elementary particles like electrons, quarks, and gluons. The currently known 25 elementary particles might in turn be made of yet-to-be-found smaller entities—most of my colleagues in particle physics certainly think so. But if Chiara Marletto is right, the success of reductionism has run its course. The next, better laws of nature might come from macroscopic objects, not microscopic ones. Her secret ingredient? Counterfactuals.

Imagine Adolf Hitler had never been born. That’s counter to the facts—a “counterfactual” in the common terminology. In The Science of Can and Can’t, her first book (excerpted in Nautilus), Marletto uses a broader definition of counterfactual: Counterfactuals are facts about what could or could not be. Contemporary physics, she argues, does not take counterfactuals into account, and this “is getting in the way of progress.”

It is certainly true that contemporary physics has shortcomings, quite possibly because the laws that we currently use in physics all work the same way. First, they require us to fully specify the configuration of a system at one moment in time, known as the initial condition. Then, we have an equation—often called the dynamical law or evolution equation—which acts on the initial condition. Finally, by applying the evolution equation to the initial state, we can calculate the configuration of the system at any moment of time. For example, if you specify the position and initial velocity of an arrow, you can use Newton’s laws to calculate where it will land.
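This division of labor can be sketched in a few lines of code (my own minimal illustration, not anything from the book): once the initial condition is specified, the evolution law—here Newton’s laws with constant gravitational acceleration and no air resistance—fixes the outcome.

```python
import math

g = 9.81  # gravitational acceleration in m/s^2

def arrow_landing(v0, angle_deg, h0=0.0):
    """Given the initial condition (launch speed, angle, height),
    Newton's laws fix the whole trajectory; here we solve for the
    horizontal distance at which the arrow hits the ground."""
    angle = math.radians(angle_deg)
    vx = v0 * math.cos(angle)
    vy = v0 * math.sin(angle)
    # Time of flight, from y(t) = h0 + vy*t - g*t^2/2 = 0
    t = (vy + math.sqrt(vy**2 + 2 * g * h0)) / g
    return vx * t

# An arrow shot at 50 m/s, at 45 degrees, from shoulder height:
print(round(arrow_landing(50.0, 45.0, h0=1.5), 1))  # about 256 meters
```

Change the initial condition (the arguments) and the same evolution law yields a different outcome; that separation is what all current fundamental theories share.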


All fundamental theories in physics currently use this division of initial condition and evolution equation. And these theories have gotten us far, but, as Marletto points out, they have limits. The most important one may be that no such theory can ever explain its own initial condition: We have to provide the initial condition as input to make predictions, otherwise we can’t calculate anything. This means that if we ever want to figure out, say, why the universe began the way it began, we’d need a different type of theory.

The current theories also struggle to explain how complexity grows in the universe. We know the universe can give rise to complexity—just look around you—but physicists don’t know how to identify this property in their combination of initial condition and evolution equation. And quantum mechanics seems to have a problem with the order of time, which might signal the need for an entirely new type of law. 

Marletto’s book is the summary of several years of work with British physicist David Deutsch, together with whom she developed constructor theory. My best attempt at explaining constructor theory is that it posits that the laws of nature distinguish between what is possible and what is not. And the best way to describe this may be, not with an initial condition and an evolution law, but with macroscopic objects—the constructors—that can perform certain tasks, but not others. What matters about those constructors is not what they are made of, but what they can and can’t do.

Energy, for example, can neither be created nor destroyed—we know that this is empirically correct to high accuracy. That’s a statement about an impossibility. Or take the fact that quantum mechanics does not allow us to perfectly copy the state of a system without destroying the original—another impossibility. Perpetual motion machines—devices that would beat the second law of thermodynamics—are also impossible.

Physicists currently treat such impossibilities as a consequence of the laws of nature. Constructor theory suggests we instead take them as the starting point: Begin with properties that macroscopic objects can have, then deduce what these objects’ microscopic constituents must be able to do. It’s turning reductionism on its head. 

The general idea sounds promising to me, but constructor theory has remained notoriously vague since Deutsch proposed it in 2012. I was hoping Marletto’s book would shed light on it. Alas, having read the book, I am not much wiser. Not only does she not explain how one would actually work with constructor theory, or what it’s good for, but she also makes a number of assertions about physics that I found utterly baffling.

To begin with, Marletto argues that current physics doesn’t use counterfactuals. That’s the supposed problem that she claims her approach can solve. But of course we use counterfactuals in physics. Each time we make a probabilistic weather forecast, we make a statement about what could have happened had the initial condition been different. Each time we calculate the structure of galactic filaments in the universe, we make a statement about what could have happened had the distribution of matter been different. Each time we calculate correlations in quantum mechanics, we make a statement about what could have happened had we measured something else. These are all counterfactuals. Marletto doesn’t mention these techniques.
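A toy sketch of this everyday use of counterfactuals (my own illustration; the logistic map is a textbook stand-in for chaotic dynamics like weather, not anything from the book or a real forecast code): run the same evolution law from two slightly different initial conditions, and the comparison between the runs is a statement about what could have happened.

```python
def evolve(x0, steps=50, r=3.9):
    """Iterate the logistic map x -> r*x*(1-x), a standard toy
    model of chaotic dynamics, from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# The factual run, and a counterfactual run asking "what if the
# initial condition had been a tiny bit different?"
factual = evolve(0.400000)
counterfactual = evolve(0.400001)
print(factual, counterfactual)  # the two runs typically differ wildly
```

The law is the same in both runs; only the "could have been" input changes. That is precisely a counterfactual computed within standard initial-condition-plus-evolution-law physics.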

Instead, she chides “reductionists” because they allegedly “dismiss” emergent criteria—such as information, thermodynamics, and consciousness—as “outside of science.” But I have never met a reductionist who would do this. To the very contrary, reductionists usually stress that consciousness can be explained by science, exactly because it emerges from the interaction of the many particles that make up a brain.

In contrast to what Marletto writes, reductionists also often emphasize that an explanation in terms of emergent properties should not be dismissed just because it is not fundamental. As a matter of fact, emergent theories are normally the more useful ones at their level of application. To offer an example that Marletto herself discusses, no one in their right mind would describe the function of a computer in terms of the elementary particles it is made up of. Reductionism says that this is possible, not that it is useful. These two things (that macroscopic descriptions are more useful, and that they can in principle be derived) are not in contradiction with each other.


I believe part of the problem is that Marletto neither explains what she means by reductionism nor by emergent. I have, above, used emergent to mean a property of a composite system that can be derived from the laws of the system’s constituents, but which doesn’t make sense on the level of constituents. Conductivity, for example, is a property of materials, but it makes no sense for individual electrons. Temperature is another example. Waves in water, cyclones, the capacity to self-reproduce—these are all emergent properties. 

Philosophers often more specifically refer to this as “weak emergence.” “Strong emergence” instead is the hypothetical existence of properties in macroscopic systems that cannot be derived from the laws of the constituents. There is no known example in the real world of strong emergence (which is why physicists normally use the word “emergence” as a synonym for “weak emergence”). That ontological reductionism is correct (for all we currently know) means that strong emergence doesn’t exist: Everything can, in principle, be derived from (“reduced to,” as a philosopher might say) the laws of elementary particles and the forces between them.

Given how essential this distinction (and also the distinction between different types of reductionism) is for Marletto’s argument, it would have merited clarification. It has also remained unclear to me just what one would actually need those constructors for. 

The one example that she discusses where constructor theory can achieve something that the current theories cannot is to provide macroscopic laws that are truly irreversible. The weakly emergent laws which we obtain by the standard method are only, for all practical purposes, irreversible but remain, in principle, reversible. This means that some reversed processes (like unbreaking an egg) are theoretically possible but in practice they are so incredibly unlikely that we never observe them. That’s the standard explanation, anyway. 
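A standard back-of-the-envelope version of that “incredibly unlikely” (my own illustration, not an example from the book): the chance that all N molecules of a gas spontaneously gather in one half of their box—a simple stand-in for an egg un-breaking—is (1/2)^N, astronomically small already for modest N.

```python
# Probability that all N gas molecules happen to sit in one half
# of the box at once: each molecule is independently in either
# half with probability 1/2.
for n in (10, 100, 1000):
    print(n, 0.5 ** n)  # for n=100 this is already about 8e-31
```

In the standard picture these probabilities are tiny but nonzero; constructor theory would instead declare such reversed processes strictly impossible.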

In constructor theory, such a process could be rendered impossible indeed, rather than just unlikely. But the standard approach is compatible with all observations, so really there is no need to change anything about it.

In the end, I have found Marletto’s book pleasant to read but not terribly illuminating. Then again, maybe I was expecting too much. It is difficult to communicate the appeal of an idea while it is in the early stages of development (ask me how I know). And yet, this is what much of science is about, so I appreciate Marletto’s effort to give the reader an insight into an exciting, young research topic. If you read The Science of Can and Can’t, keep in mind that you could be witnessing scientific history in the making. 

Sabine Hossenfelder is a Research Fellow at the Frankfurt Institute for Advanced Studies where she works on modifications of general relativity, phenomenological quantum gravity, and the foundations of quantum mechanics. If you want to know more about what is going wrong with the foundations of physics, read her book Lost in Math: How Beauty Leads Physics Astray. Follow her on Twitter @skdh.
