There is just something obviously reasonable about the following notion: If all life is built from atoms that obey precise equations we know—which seems to be true—then the existence of life might just be some downstream consequence of these laws that we haven’t yet gotten around to calculating. This is essentially a physicist’s way of thinking, and to its credit, it has already done a great deal to help us understand how living things work.

Thanks to pioneers like Max Delbrück, who crossed over from physics to biology in the middle of the 20th century, the influence of quantitative analyses from the physical sciences helped to give rise to mechanistic, molecular approaches in cell biology and biochemistry that led to many revolutionary discoveries. Imaging techniques such as X-ray crystallography, nuclear magnetic resonance, and super-resolution microscopy have provided a vivid portrait of the DNA, proteins, and other structures smaller than a single cell that make life tick on a molecular scale.1

Moreover, by cracking the genetic code, we have become able to harness the machinery of living cells to do our bidding by assembling new macromolecules of our own devising. As we have gained an ever more accurate picture of how life’s tiniest and simplest building blocks fit together to form the whole, it has become increasingly tempting to imagine that biology’s toughest puzzles may only be solved once we figure out how to tackle them on physics’ terms.

But approaching the subject of life with this attitude will fail us, for at least two reasons. The first reason we might call the fallacy of reductionism. Reductionism is the presumption that any piece of the universe we might choose to study works like some specimen of antique, windup clockwork, so that it is easy (or at least eminently possible) to predict the behavior of the whole once you know the rules governing how each of its parts pushes on and moves with the others.

The dream of explaining and predicting everything from a few simple rules has long captured the imagination of many scientists, particularly physicists. And, in all fairness, a great deal of good science has been propelled forward by the hunger of some researchers for a more completely reductive explanation of the phenomenon that interests them. After all, there are things in the world that can be understood as the result of known interactions among various simpler pieces. From the rise and fall of ocean tides with the moon’s gravitational tug, to the way that some genetic diseases can be traced to molecular events arising from the altered chemistry of one tiny patch on a protein’s surface, sometimes the thing we are studying looks like a comprehensible sum of its parts.

Alas, the hope that all scientific puzzles would be conquered through reductionism was more popular with physicists before the 20th century rolled around. Since then, multiple Nobel laureates in physics (and countless others as well) have written lucidly about how and why reductionist thinking often fails.2 You cannot use Newton’s laws or quantum theory to predict the stock market, nor even much simpler properties of “many-particle” systems, such as a turbulent fluid or a supercooled magnet.3 In all such cases, the physical laws supposedly “governing” it all are swamped by the immensity of what we do not know, cannot measure, or lack the ability to compute directly. Physics still works on such systems, but not solely by starting with fundamental equations governing the microscopic parts.

The second mistake in how people have viewed the boundary between life and non-life is still rampant in the present day and originates in the way we use language. A great many people imagine that if we understand physics well enough, we will eventually comprehend what life is as a physical phenomenon in the same way we now understand how and why water freezes or boils. Indeed, it often seems people expect that a good enough physical theory could become the new gold standard for saying what is alive and what is not.

However, this approach fails to acknowledge that our own role in giving names to the phenomena of the world precedes our ability to say with any clarity what it means to even call something alive. A physicist who wants to devise theories of how living things behave or emerge has to start by making intuitive choices about how to translate the characteristics of the examples of life we know into a physical language. After one has done so, it quickly becomes clear that the boundary between what is alive and what is not is something that already got drawn at the outset, through a different way of talking than physics provides.

To some degree, a hopeful inclination toward reductionism is expressed in the very asking of the question of where life comes from. We look at a living organism and cannot help but wonder whether such breathtaking success in form and function could simply be the result of a bunch of more basic pieces bouncing off each other like simple and predictable billiard balls. Is there something more in the machine than all its dumbly vibrating parts? If there isn’t, shouldn’t that mean we can eventually understand how the whole thing fits together? Put another way, wouldn’t any proposed explanation for the emergence of life have to break it all down into a series of rationalized steps, where each next one follows sensibly and predictably from the last? If so, how is that not the same thing as saying we want to reduce life to a choreographed performance directed by a simple, calculable set of known physical rules?

It must be granted that physicists have already identified some rules that prove to make highly accurate predictions in systems that once seemed hopelessly and mysteriously complicated. Thanks to the ideas of people like Kepler and Newton, the motion of heavenly bodies is now an open book, and our ability to compute where these bright lights in the sky go is such an unremarked banality that it is now possible to get an extensive education in physics at many a great university without ever delving into the specialty sideshow of rigorous orbital mechanics. Imagine, though, being a brilliant natural philosopher at any point during most of human history, and marveling at the seemingly intractable complexity of how the sun, moon, and stars seem to continually rearrange themselves in the firmament as the days and years pass. The idea that a terse pair of equations describing gravitation and motion under force could bring distant galaxies, the wandering planets, and boxes dangling by coiled springs all into one comprehensive theoretical frame must have been inconceivable even to the greatest genius of every era for thousands of years. The scope and significance of the revolution that started with Newton and his contemporaries are hard to overstate.
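
To appreciate just how terse that frame is, consider a minimal sketch in Python (with invented, dimensionless numbers; the example is illustrative, not drawn from the essay) in which one and the same second-law integrator handles both a wandering planet and a box dangling from a spring, with only the force law swapped out:

```python
import numpy as np

def step(x, v, force, dt=1e-3):
    # One velocity-Verlet step of Newton's second law, a = F/m (take m = 1).
    v = v + 0.5 * dt * force(x)
    x = x + dt * v
    v = v + 0.5 * dt * force(x)
    return x, v

gravity = lambda x: -x / np.linalg.norm(x)**3  # inverse-square attraction
spring  = lambda x: -x                         # Hooke's-law restoring force

for force, label in [(gravity, "planet"), (spring, "box on a spring")]:
    x, v = np.array([1.0, 0.0]), np.array([0.0, 0.9])
    for _ in range(10_000):                    # ten time units of motion
        x, v = step(x, v, force)
    print(label, "-> position:", np.round(x, 3))
```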

And then came the 20th century! Einstein began by contemplating the equations that describe the motion of light, and through sheer force of insight ended up reimagining the origins of gravity, finally explaining the last remaining puzzle of planetary motion that Newton could not touch (namely, the anomalous precession of Mercury’s orbit). Meanwhile, Erwin Schrödinger’s quantum mechanical wave equation unlocked the atom, providing an elegant quantitative explanation for the colors of light emitted from various types of electrified gases. This was a bizarre, unintuitive theory of the mathematical inner workings of objects too small to be seen or touched, yet it could still match experimental measurements with stunning accuracy. In the wake of these grand scientific victories, one might forgive the odd scientist or two for feeling like all unpredictability might eventually be swept away as newer and ever more brilliant theories arrived.
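
To give one concrete instance of that accuracy: the wave equation’s hydrogen energy levels scale as -1/n², which fixes the visible wavelengths an electrified hydrogen tube can emit. A minimal sketch, where the measured Rydberg constant is the only physical input:

```python
R = 1.0968e7  # Rydberg constant for hydrogen, in 1/m

# Balmer series: transitions from level n down to level 2 produce
# hydrogen's visible spectral lines.
for n in range(3, 7):
    wavelength = 1 / (R * (1 / 2**2 - 1 / n**2))
    print(f"n = {n} -> 2: {wavelength * 1e9:.0f} nm")

# Prints roughly 656, 486, 434, and 410 nm: the red-through-violet
# lines actually observed in a hydrogen discharge lamp.
```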

On closer inspection, however, this hit parade of wins for reductive theoretical science reveals a selection bias. What these and many other examples of successful physical theories have in common is that they perform best when trying to predict a well-isolated piece of the world described by a relatively simple mathematical formulation involving a few different things one can measure—the one-planet solar system, the single, solitary hydrogen atom, and so on. In each of these cases, the theory succeeds by filtering out the rest of the universe and focusing on a few equations that accurately describe the relationships among a small number of physical quantities.

The fact is, there are many ways in which the extreme reductionist, armed with a powerful supercomputer, is going to miss the mark by miles when trying to compute the behavior of the whole directly from the simple rules obeyed by its parts. As physics Nobel laureate P.W. Anderson once famously wrote: “More is different.”4 And while we may well succeed at coming up with very good physical theories of things like freezing crystals or viscous fluids, it will not be because we have started by perfecting our detailed models of the atoms or subatomic particles out of which these things are built.

There’s no question that molecular biology has a long and venerable history as a hard science in its own right. Thanks to countless experiments on molecules, cells, tissues, and whole organisms, it is now abundantly clear that the marvelously diverse functional capabilities of a living thing all have sound bases in the physical properties of its material parts.

However, this is not to say that reductionism reigns; on the contrary, the “more is different” idea of emergent properties rears its head everywhere in the study of how life works. Blood, for example, is a liquid that flows through veins and carries oxygen, and its biochemical capacity to absorb and release oxygen is well understood in terms of the atomic structure of a protein on red blood cells known as hemoglobin. At the same time, though, a quantity such as the viscosity of blood (which in theory results from mixing water molecules with plasma proteins and many other components) would be utterly impossible for anyone to predict precisely from first principles. The number of different factors contributing to how a given cell or molecule slides past another in such a heterogeneous mixture is so particular, and so sensitive to small differences in the interaction properties of each pair of components, that no computation will ever be as reliable and informative as simply doing the experiment and measuring the empirical answer.

Yet this empirical answer is important! Life thrives in the realm of the particular, where its components achieve quite specific and precise properties that could trigger catastrophic failures if they turned out differently. We cannot assume that any small change to how sluggishly blood slides through a vessel, for example, or to the DNA sequence that instructs the cell how to build a particular protein, will necessarily make only a small difference to how the living thing functions as a whole. Life is a grab bag of different pieces, some of whose physical properties are easier to predict mechanistically than others, and it is certainly the case that at least some of the factors that matter a great deal to how a living thing works will fall into the category of highly non-universal emergent properties that are impossible to derive from first principles.

At base, this challenge will always keep popping up, because talking in physical terms is never the same thing as talking in biological ones, and so biologically important questions are not picked for their physical tractability. Instead, biological and physical ways of talking ground themselves in very different conceptual spaces.

Physics is an approach to science that roots itself in the measurement of particular quantities: distance, mass, duration, charge, temperature, and the like. Whether we are talking about making empirical observations or developing theories to make predictions, the language of physics is inherently metrical and mathematical. The phenomena of physics are always expressed in terms of how one set of measurable numbers behaves when other sets of measurable numbers are held fixed or varied. This is why the genius of Newton’s Second Law, F = ma, lay not merely in proposing a successful equation relating force (F), mass (m), and acceleration (a), but in the recognition that these were all quantities in the world that could be independently measured and compared in order to discover such a general relationship.
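
A small illustration of that point, sketched in Python with fabricated “measurements”: the mass is read off a balance, the force is set by a calibrated spring, and the acceleration is extracted from raw position-versus-time data; only once all three are in hand independently can m times a be compared to F.

```python
import numpy as np

F = 1.0                    # newtons, applied by a calibrated spring
masses = [0.5, 1.0, 2.0]   # kilograms, each read off a balance

for m in masses:
    t = np.linspace(0, 1, 101)                # seconds
    x = 0.5 * (F / m) * t**2                  # recorded positions, meters
    a = np.gradient(np.gradient(x, t), t)     # acceleration from positions alone
    print(f"m = {m} kg: measured m*a = {m * a[50]:.2f} N vs applied F = {F} N")
```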

This is not how the science of biology works. It is true that doing excellent research in biology involves trafficking in numbers, especially these days: For example, statistical methods help one gain confidence in trends discovered through repeated observations (such as a significant but small increase in the rate of cell death when a drug is introduced). Nonetheless, there is nothing fundamentally quantitative about the scientific study of life. Instead, biology takes the categories of living and nonliving things for granted as a starting point, and then uses the scientific method to investigate what is predictable about the behavior and qualities of life. Biologists did not have to go around convincing humanity that the world actually divides into things that are alive and things that are not; instead, in much the same way that it is quite popular across the length and breadth of human language to coin terms for commonplace things like stars, rivers, and trees, the difference between being alive and not being alive gets denoted with vocabulary.

In short, biology could not have been invented without the preexisting concept of life to inspire it, and all it needed to get going was for someone to realize that there were things to be discovered by reasoning scientifically about things that were alive. This means, though, that biology most certainly is not founded on mathematics in the way that physics is. Discovering that plants need sunlight to grow, or that fish will suffocate when taken out of water, requires no quantification of anything whatsoever. Of course, we could learn more by measuring how much sunlight the plant got, or timing how long it takes for the fish-out-of-water to expire. But the basic empirical law in biological terms only concerns itself with what conditions will enable or prevent thriving, and what it means to thrive comes from our qualitative and holistic judgment of what it looks like to succeed at being alive. If we are honest with ourselves, the ability to make this judgment was not taught to us by scientists, but comes from a more common kind of knowledge: We are alive ourselves, and constantly mete out life and death to bugs and flowers in our surroundings. Science may help us to discover new ways to make things live or die, but only once we tell the scientists how to use those words. We did not know any physics when we invented the word “life,” and it would be strange if physics only now began suddenly to start dictating to us what the word means.

Jeremy England is senior director in artificial intelligence at GlaxoSmithKline, principal research scientist at Georgia Tech, and the former Thomas D. and Virginia W. Cabot career development associate professor of physics at MIT. This essay is adapted from England’s new book Every Life Is on Fire: How Thermodynamics Explains the Origins of Living Things.

Read our interview with Jeremy England, “The Physicist’s New Book of Life.” 

Footnotes

1. Watson, J.D. & Crick, F.H.C. Molecular structure of nucleic acids. Nature 171, 737–738 (1953); Wüthrich, K. Protein structure determination in solution by NMR spectroscopy. Journal of Biological Chemistry 265, 22059–22062 (1990); Rust, M.J., Bates, M., & Zhuang, X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nature Methods 3, 793 (2006).

2. Laughlin, R.B. & Pines, D. The theory of everything. Proceedings of the National Academy of Sciences 97, 28–31 (2000); Anderson, P.W. More is different. Science 177, 393–396 (1972).

It should be said that neither Anderson nor Laughlin means to argue that systems with many components are wholly unpredictable; on the contrary, both made their careers discovering predictability in such devilishly complex systems. However, what often happens in the so-called world of hard condensed matter (i.e., metals and more exotic solid-state materials) is that the way of cutting through the multitudes and seeing order in the whole is to realize that the collective behavior must be governed by some very specific symmetries of the system at hand.

This can get quite mathematically rarefied, but for a simple example, imagine a flat, planar lattice of arrows pointing every which way in the plane. Suppose that each arrow’s energy is lower to the extent that it points in the same direction as its neighbors. Clearly, the energy is therefore lowest for the collective when all the arrows point in the same direction. Yet symmetry tells us that the lowest-energy state should not exhibit an average bias to point in any one direction, because the overall way we determine the energy of the system looks exactly the same when we rotate our perspective. The resolution is to realize that there are infinitely many equivalent lowest-energy states, with all arrows aligned with each other, but with each collectively aligned state pointing in a different direction.
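
A minimal numerical sketch of this picture (assuming unit coupling strength and a lattice that wraps around at its edges, neither of which is specified above): a disordered configuration has high energy, while fully aligned configurations pointing in different global directions all tie for the minimum.

```python
import numpy as np

def energy(theta):
    # Each arrow is an angle in the plane; each neighboring pair of arrows
    # contributes -cos(angle difference), lowest when the two are parallel.
    return (-np.cos(theta - np.roll(theta, 1, axis=0))
            - np.cos(theta - np.roll(theta, 1, axis=1))).sum()

n = 32
rng = np.random.default_rng(0)
print(f"disordered: {energy(rng.uniform(0, 2 * np.pi, (n, n))):.0f}")

for phi in (0.0, 1.0, 2.5):   # three different collective directions
    print(f"aligned at {phi} rad: {energy(np.full((n, n), phi)):.1f}")
# Every aligned state reaches the same minimum, -2 * n * n, no matter
# which direction the arrows collectively point.
```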

3. It is worth stating specifically why one might have ever imagined something so outlandish as the idea that quantum theory could be used to predict the stock market. The point is that, from the perspective of a physicist, all the people and documents and computers and phones and factories and mines and forests and winds (and everything else) that act to determine the price of a stock are made of atoms. The way these atoms bind together into molecules is described quite well by known equations that govern the interaction of electrical charge, light, and matter on the tiniest of scales. So why do we not try to predict stocks (and indeed, all the events of the world influencing the stocks) using these equations? Not only does the sheer scale of the computation required to represent such fine details put the task far beyond reach, but we also have little way of knowing most of the numbers that would serve as input to the model. Accordingly, much the same way that the shareholders may not easily get to know all that is ailing a publicly traded company, we also, by default, know very little about exactly what each atom or molecule on the planet is doing. Instead of trying to measure every one of those details, we are much better served making predictive models that paint a simpler picture of the thing we are trying to model (for example, by just positing that prices are determined by a balance between supply and demand).
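
For a flavor of what such a simpler picture can look like, here is a deliberately crude sketch in which every curve and number is invented for illustration: a price gets nudged upward whenever demand exceeds supply and downward otherwise, settling where the two balance.

```python
def demand(price):           # buyers want less as the price rises
    return max(0.0, 100 - 8 * price)

def supply(price):           # sellers offer more as the price rises
    return 12 * price

price = 1.0
for _ in range(200):
    price += 0.01 * (demand(price) - supply(price))  # excess demand raises price

print(f"equilibrium price: {price:.2f}")  # analytically 100 / (8 + 12) = 5.00
```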

4. Anderson, P.W. More is different. Science 177, 393–396 (1972).

Lead image: Sergey Nivens
