
I. Abiogenesis

How did life on Earth first arise? Despite his clear articulation of the principle of evolution, Charles Darwin didn’t have a clue. In 1863, he wrote to his close friend Joseph Dalton Hooker that “it is mere rubbish, thinking, at present, of origin of life; one might as well think of origin of matter.”

Today, we have more of a clue, although the details are lost to deep time. Biologists and chemists working in the field of abiogenesis—the study of the moment when, 3 or 4 billion years ago, chemistry became life—have developed multiple plausible origin stories. In one, proto-organisms in an ancient “RNA world” were made of RNA molecules, which both replicated and folded into 3-D structures that could act like primitive enzymes.1 In a competing “metabolism first” account, chemical reaction networks sputtered to life in the porous rock chimneys of “black smokers” on the ocean floor, powered by geothermal energy; RNA and DNA came later.2

Either way, even bacteria—the simplest life forms surviving today—are a product of many subsequent evolutionary steps. The most important of these steps may have been large and sudden, not the everyday, incremental mutation and selection theorized by Darwin. These “major evolutionary transitions” occur when simpler replicating entities become interdependent, forming a larger, more complex, and more capable replicator.3

As maverick biologist Lynn Margulis discovered in the 1960s, eukaryotic cells are the result of such a symbiotic event, when the ancient bacteria that became our mitochondria were engulfed by another single-celled life form, related to today’s archaea. At moments like these, the tree of life doesn’t just branch; it also entangles with itself, its branches merging to produce radically new forms. Margulis was an early champion of the idea that these events are what drive evolution’s leaps forward.

It’s likely that bacteria are themselves the product of such symbiotic events—for instance, between RNA and proteins.4 Even the feebly replicating chemical reaction networks in those black smokers can be understood as such an alliance, a set of reactions which, by virtue of catalyzing each other, formed a more robust, self-sustaining whole.

So in a sense, Darwin may have been right to say that “it is mere rubbish” to think about the origin of life, for life may have had no single origin, but rather, have woven itself together from many separate strands, the oldest of which look like ordinary chemistry. Intelligent design isn’t required for that weaving to take place; only the incontrovertible logic that sometimes, an alliance creates something enduring, and that whatever is enduring … endures.

Often, enduring means both occupying and creating entirely new niches. Hence eukaryotes did not replace bacteria; indeed, they ultimately created many new niches for them. Likewise, the symbiotic emergence of multicellular life—another major evolutionary transition—did not supplant single-celled life. Our planet is a palimpsest, with much of its past still visible in the present. Even the black smokers are still bubbling away. The self-catalyzing chemistry of proto-life may still be brewing down there, slowly, on the ocean floor.

II. Computation

While most biochemists have focused on understanding the particular history and workings of life on Earth, a more general understanding of life as a phenomenon has come from an unexpected quarter: computer science. The theoretical foundations of this connection date back to two of the field’s founding figures, Alan Turing and John von Neumann.

After earning a degree in mathematics at Cambridge and a fellowship at King’s College in 1935, Turing focused on one of the fundamental outstanding problems of the day: the Entscheidungsproblem (German for “decision problem”), which asked whether there exists an algorithm for determining the validity of an arbitrary mathematical statement. The answer turned out to be “no,” but the way Turing went about proving it ended up being far more important than the result itself.5

Turing’s proof required that he define a general procedure for computation. He did so by inventing an imaginary gadget we now call the “Turing Machine.” The Turing Machine consists of a read/write head, which can move left or right along an infinite tape, reading and writing symbols on the tape according to a set of rules specified by a built-in table.

First, Turing showed that any calculation or computation that can be done by hand could also be done by such a machine, given an appropriate table of rules, enough time, and enough tape. He then showed that there exist certain tables of rules that define universal machines, such that the tape itself can specify not only any input data, but also the desired table, encoded as a sequence of symbols. This is a general-purpose computer: a single machine that can be programmed to compute anything.
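
To make the idea concrete, here is a minimal sketch in Python of the kind of machine Turing described: a head stepping along a tape, reading and writing symbols according to a fixed rule table. The rule table shown (which increments a binary number) is a hypothetical example, not any canonical machine.

```python
# A minimal Turing Machine sketch: a head moves along a tape, reading and
# writing symbols according to a fixed rule table.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = defaultdict(lambda: blank, enumerate(tape))  # unbounded in both directions
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        new_state, new_symbol, move = rules[(state, cells[head])]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
        state = new_state
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip(blank)

# Example rule table: walk to the rightmost bit, then propagate a carry leftward.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "L"),
    ("carry", "_"): ("halt", "1", "L"),
}

print(run_turing_machine(rules, "1011"))  # prints "1100": binary 11 plus 1 is 12
```

A universal machine is then just a particular rule table whose tape holds, alongside the input, an encoding of some other rule table like the one above, which it reads and simulates.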


COMPLEXITY RISING: A video, excitedly captured by the author, of digital life emerging in one of the first runs of the programming language bff on his laptop. Imperfect replicators arise almost immediately, with a sharp transition to whole-tape replication after approximately 6 million interactions, followed by several further symbiotic “complexifications.” As these transitions take place, the number of computations (“ops”) per interaction rises from a few to thousands.

In the early 1940s, von Neumann, a Hungarian-American polymath who had already made major contributions to physics and mathematics, turned his attention to computing. He became a key figure in the design of the ENIAC and EDVAC—among the world’s first real-life Universal Turing Machines, now known as “computers.”

Over the years, a great deal of thought and creativity has gone into figuring out how simple a Universal Turing Machine can get. Only a few instructions are needed. Esoteric language nerds have even figured out how to compute with just a single instruction (a so-called OISC or “one instruction set computer”).

There are irreducible requirements, though: The instruction, or instructions, must change the environment in some way that subsequent instructions are able to “see,” and there must be conditional branching, meaning that depending on the state of the environment, either one thing or another will happen. In most programming languages, this is expressed using “if/then” statements. When there’s only a single instruction, it must serve both purposes, as with the SUBLEQ language, whose only instruction is “subtract and branch if the result is less than or equal to zero.”
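
To make the OISC idea concrete, here is a tiny SUBLEQ interpreter sketched in Python. Memory is a flat list of integers read three at a time as (a, b, c): subtract the value at address a from the value at address b, and jump to c if the result is less than or equal to zero. The sample program is a made-up illustration, not taken from any particular OISC system.

```python
# SUBLEQ: "subtract and branch if the result is less than or equal to zero."
# Every triple (a, b, c) in memory is one instruction; a negative jump target halts.
def run_subleq(mem, pc=0, max_steps=10_000):
    for _ in range(max_steps):
        if pc < 0:
            break
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                      # the single instruction's "action"
        pc = c if mem[b] <= 0 else pc + 3     # ...and its conditional branch
    return mem

# Hypothetical program: add the value at address 12 into address 13
# (addition built out of two subtractions), clear the scratch cell, halt.
mem = [12, 14, 3,    # Z -= X
       14, 13, 6,    # Y -= Z   (i.e., Y += X)
       14, 14, 9,    # Z -= Z   (clear the scratch cell)
       14, 14, -1,   # halt by jumping to a negative address
       7, 35, 0]     # data: X = 7 (addr 12), Y = 35 (addr 13), Z = 0 (addr 14)

print(run_subleq(mem)[13])  # prints 42
```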

III. Functionalism

Both Turing and von Neumann were keenly aware of the parallels between computers and brains, developing many ideas that would become foundational to neuroscience and AI. Von Neumann’s report on the EDVAC explicitly described the machine’s logic gates as electronic neurons.6 Whether or not that analogy held (it did not; neurons are more complex than logic gates), the key insight here was that both brains and computers are defined not by their mechanisms, but by what they do—their function, both in the colloquial and in the mathematical sense.

A thought experiment can illustrate the distinction. While we still have much to learn about the brain, biophysicists have thoroughly characterized the electrical behavior of individual neurons. Hence, we can write computer code that accurately models how they respond to electrical and chemical inputs. If we were somehow able to replace one of the neurons in your brain with a computer running such a model, plugging its inputs and outputs as appropriate into neighboring neurons, would the rest of your brain—or “you”—be able to tell the difference?

If the model is faithful, the answer is “no.” That answer remains the same if one were to replace a million neurons … or all of them. What matters, whether at the scale of an individual neuron or a whole brain, is function. We are made out of functions, and those functions are made out of functions, all the way down.
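
To give a flavor of what such neuron-modeling code looks like, here is a deliberately crude “leaky integrate-and-fire” model in Python. It is a toy, not a faithful biophysical simulation (real models, such as Hodgkin-Huxley, track individual ion channels), but it illustrates how a neuron’s input/output behavior can be captured as a function.

```python
import numpy as np

def leaky_integrate_and_fire(input_current, dt=1e-4, tau=0.02,
                             v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065,
                             resistance=1e7):
    """Toy neuron: the membrane voltage leaks toward rest, integrates injected
    current, and emits a spike (then resets) whenever it crosses threshold."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        v += (-(v - v_rest) + resistance * current) * dt / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Drive the model with a constant 2 nA current for 100 ms of simulated time.
current = np.full(1000, 2e-9)
print(len(leaky_integrate_and_fire(current)), "spikes")
```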

In 1950s popular culture, computers were often thought to be “like” brains for superficial reasons, like the fact that they both rely on electricity. For Turing, such details were irrelevant, and attaching importance to them was mere superstition. A computer could just as well be made out of cogs and gears, like the steampunk “Analytical Engine” Charles Babbage and Ada Lovelace dreamed of (but sadly, never built) in the 19th century. The deeper point was that a sufficiently powerful general-purpose computer, suitably programmed, can compute whatever the brain computes.

AI was the search for that program, and the point of Turing’s Imitation Game, a thought experiment known nowadays as the Turing Test, was that when such a program can behave functionally like an intelligent human being, we should conclude that the computer (or the running program) is likewise intelligent.

In its usual form, the Turing Test simplifies things by restricting interaction to a chat window, but when one zooms out to consider a whole living body, not just a brain in a vat, this simplification no longer seems adequate. Evolutionarily speaking, the most basic function of an organism is not to send and receive text messages, but to reproduce. That is, its output is not just information, but a real-life copy of something like itself. How, von Neumann wondered, could a machine (in the broadest possible sense) reproduce? How, in other words, is life possible?

IV. Reproduction

Von Neumann imagined a machine made out of standardized parts, like LEGO bricks, paddling around on a reservoir where those parts could be found bobbing on the water.7 The machine’s job is to gather all the needed parts and construct another machine like itself. Of course, that’s exactly what a bacterium has to do in order to reproduce; in fact it’s what every cell must do in order to divide, and what every mother must do in order to give birth.

On the face of it, making something as complex as you yourself are has a whiff of paradox, like lifting yourself up by your own bootstraps. However, von Neumann showed that it is not only possible, but straightforward, using a generalization of the Universal Turing Machine.

EVOLUTION THROUGH SYMBIOSIS: This animation shows, for a random selection of tapes in a particular bff soup, the provenance of each tape’s 64 bytes, beginning at interaction 10,000 and ending at interaction 10,000,000. Vertical lines at the start show bytes tracing their lineages to the original (random) values; diagonal lines show bytes increasingly being copied from one location to another. Around 2 million interactions, imperfect replicators begin competing, chaotically overwriting each other to create short-lived chimeras; then, at about 5.6 million interactions, a symbiotic whole-tape replicator suddenly emerges out of the chaos like a cat’s cradle. It continues to evolve, but conserves elements of its original architecture indefinitely.

He envisioned a “machine A” that would read a tape containing sequential assembly instructions based on a limited catalog of parts, and carry them out, step by step. Then, there would be a “machine B” whose function is to copy the tape—assuming the tape itself is also made out of available parts. If instructions for building machines A and B are themselves encoded on the tape, then voilà—you have a replicator.

Instructions for building any additional non-reproductive machinery can also be encoded on the tape, so it’s even possible for a replicator to build something more complex than itself. A seed, or a fertilized egg, illustrates the point.
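
A cartoon of von Neumann’s scheme can be written in a few lines of Python. Under loose, made-up assumptions, the “tape” is just a list of part names, “machine A” is a constructor that assembles whatever the tape lists from a reservoir of parts, and “machine B” is a tape copier. It is a sketch of the logic, not of any physically realizable constructor.

```python
# Cartoon of von Neumann's self-replicating architecture: a universal
# constructor (machine A), a tape copier (machine B), and a tape that
# describes how to build both, plus any extra machinery.

TAPE = ["constructor_arm", "tape_copier", "extra_widget"]   # hypothetical part names

def machine_a(tape, reservoir):
    """Build each part named on the tape by drawing it from the reservoir."""
    built = []
    for part in tape:
        reservoir[part] -= 1      # consume one raw part from the environment
        built.append(part)        # "assemble" it into the offspring's body
    return built

def machine_b(tape):
    """Copy the tape itself (assuming tape material is also freely available)."""
    return list(tape)

def replicate(tape, reservoir):
    offspring_body = machine_a(tape, reservoir)   # a new A, a new B, and extras
    offspring_tape = machine_b(tape)              # a new instruction tape
    return offspring_body, offspring_tape

reservoir = {"constructor_arm": 10, "tape_copier": 10, "extra_widget": 10}
body, tape = replicate(TAPE, reservoir)
print(body, tape)   # the child carries the same tape, so it, too, can replicate
```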

Remarkably, von Neumann described these requirements for a self-replicating machine before the discovery of DNA’s structure and function. Nonetheless, he got it exactly right. For life on Earth, DNA is the tape; DNA polymerase, which copies DNA, is “machine B”; and ribosomes, which build proteins by following the sequentially encoded instructions on DNA, are “machine A.” Ribosomes and DNA polymerase are made out of proteins whose sequences are, in turn, encoded in our DNA and manufactured by ribosomes. That is how life lifts itself up by its own bootstraps.

V. Equivalence

Although this is seldom fully appreciated, von Neumann’s insight established a profound link between life and computation. Machines A and B are Turing machines. They must execute instructions that affect their environment, and those instructions must run in a loop, starting at the beginning and finishing at the end. That requires branching, such as “if the next instruction is the codon CGA, then add an arginine to the protein under construction,” and “if the next instruction is UAG, then STOP.” It’s not a metaphor to call DNA a “program”—that is literally the case.
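
That branching can be written down directly. Below is a schematic Python rendering of translation as a loop over codons, with a tiny, incomplete codon table used purely for illustration; the real genetic code has 64 entries, and the ribosome actually reads messenger RNA transcribed from DNA rather than DNA itself.

```python
# Translation as a program: step through a sequence three letters (one codon)
# at a time, branching on each codon. The table is a small illustrative subset
# of the 64-entry genetic code.
CODON_TABLE = {
    "AUG": "Met",   # start codon
    "CGA": "Arg",   # "if the next instruction is CGA, add an arginine"
    "GGC": "Gly",
    "UUU": "Phe",
    "UAG": "STOP",  # "if the next instruction is UAG, then STOP"
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]   # conditional branch on the codon
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGCGAGGCUUUUAG"))   # ['Met', 'Arg', 'Gly', 'Phe']
```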

There are meaningful differences between biological computing and the kind of digital computing done by the ENIAC or your smartphone. DNA’s workings are subtle and multilayered, involving phenomena like epigenetics and gene proximity effects. Cellular DNA is nowhere near the whole story, either. Our bodies contain (and continually swap) countless bacteria and viruses, each running their own code. Biological computing is massively parallel; your cells have somewhere in the neighborhood of 300 quintillion ribosomes. All this biological computing is also noisy; every chemical reaction and self-assembly step is stochastic.

It’s computing, nonetheless. There are, in fact, many classic algorithms in computer science that require randomness, which is why Turing insisted that the Ferranti Mark I, an early computer he helped to design in 1951, include a random number instruction. Randomness is thus a small but important extension to the original Turing Machine, though any computer can simulate it by computing deterministic but random-looking or “pseudorandom” numbers.
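
“Deterministic but random-looking” can be as simple as the following sketch: a linear congruential generator, one of the oldest pseudorandom recipes. The constants are a common textbook choice for a 32-bit generator; the point is only that a fixed rule, iterated, produces a stream that passes for noise.

```python
# A linear congruential generator: a deterministic rule whose output
# nevertheless looks random. Same seed, same sequence, every time.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m          # scale each value into [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])
```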

Parallelism, too, is increasingly fundamental to computer science. Modern AI, for instance, depends on both massive parallelism and randomness—as in the “stochastic gradient descent” algorithm, used for training most of today’s neural nets, and the “temperature” setting used in virtually all chatbots to introduce a degree of randomness into their output.
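
The “temperature” knob is easy to sketch: divide a model’s raw scores by a temperature before turning them into probabilities, then sample. Higher temperatures flatten the distribution and make output more varied; lower temperatures sharpen it toward the single most likely choice. This is the generic recipe, not any particular chatbot’s implementation.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Turn raw scores ("logits") into a probability distribution, scaled by
    temperature, and draw one index at random from that distribution."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                       # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.2]   # hypothetical scores for three candidate tokens
print(sample_with_temperature(logits, temperature=0.2))   # nearly always token 0
print(sample_with_temperature(logits, temperature=2.0))   # noticeably more varied
```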

Randomness, massive parallelism, and subtle feedback effects all conspire to make it very, very hard to reason about, “program,” or “debug” biological computation by hand. (We’ll need AI help.) Still, we should keep in mind that Turing’s fundamental contribution was not the invention of any specific machine for computing, but a general theory of computation. Computing is computing, and all computers are, at bottom, equivalent.

Any function that can be computed by a biological system can be computed by a Turing Machine with a random number generator, and vice versa. Anything that can be done in parallel can also be done in series, though it might take a very long time. Indeed, much of the inefficiency in today’s artificial neural net-based AI lies in the fact that we’re still programming serial processors to loop sequentially over operations that brains do in parallel.

VI. Artificial Life

Von Neumann’s insight shows that life depends on computation. Thus, in a universe whose physical laws did not allow for computation, it would be impossible for life to arise. Luckily, the physics of our universe do allow for computation, as proven by the fact that we can build computers—and that we’re here at all.

Now we’re in a position to ask: In a universe capable of computation, how often will life arise? Clearly, it happened here. Was it a miracle, an inevitability, or somewhere in between? A few collaborators and I set out to explore this question in late 2023.

Our first experiments used an esoteric programming language called (apologies) Brainfuck.8 While not as minimal as SUBLEQ, Brainfuck is both very simple and very similar to the original Turing Machine. Like a Turing Machine, it involves a read/write head that can step left or right along a tape.

In our version, which we call “bff,” there’s a “soup” containing thousands of tapes, each of which includes both code and data. The tapes are of fixed length—64 bytes—and start off filled with random bytes. Then, they interact at random, over and over. In an interaction, two randomly selected tapes are stuck end to end, creating a 128-byte-long string, and this combined tape is run, potentially modifying itself. The 64-byte-long halves are then pulled back apart and dropped back into the soup. Once in a while, a byte value is randomized, as cosmic rays do to DNA.
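
The flavor of these soup dynamics can be conveyed in a short Python sketch. To be clear, this is not our actual bff implementation: the interpreter below uses a single data head, treats “,” as a no-op, and caps each run at a modest step budget, whereas the real system gives the heads richer copy semantics. The outer loop, though, is exactly as described: concatenate two random tapes, run the result, split it, and occasionally mutate a byte.

```python
import random

SOUP_SIZE, TAPE_LEN = 1024, 64
MUTATION_RATE = 1e-4      # chance that any given byte gets randomized per interaction

def execute(tape, max_steps=2048):
    """Simplified single-head interpreter (NOT the real bff). Code and data
    share the tape, so a running program can read and rewrite itself."""
    pc = head = 0
    for _ in range(max_steps):
        if pc >= len(tape):
            break
        op = chr(tape[pc])
        if op == ">":   head = (head + 1) % len(tape)
        elif op == "<": head = (head - 1) % len(tape)
        elif op == "+": tape[head] = (tape[head] + 1) % 256
        elif op == "-": tape[head] = (tape[head] - 1) % 256
        elif op == "[" and tape[head] == 0:        # skip forward past matching "]"
            depth = 1
            while depth and pc < len(tape) - 1:
                pc += 1
                depth += {"[": 1, "]": -1}.get(chr(tape[pc]), 0)
        elif op == "]" and tape[head] != 0:        # jump back to matching "["
            depth = 1
            while depth and pc > 0:
                pc -= 1
                depth -= {"[": 1, "]": -1}.get(chr(tape[pc]), 0)
        # every other byte value (including "," here) is treated as a no-op
        pc += 1
    return tape

# The "soup": many 64-byte tapes filled with random bytes.
soup = [bytearray(random.randbytes(TAPE_LEN)) for _ in range(SOUP_SIZE)]

def interact(soup):
    """One interaction: concatenate two random tapes, run the 128-byte result
    (which may rewrite itself), split it back apart, and mutate the odd byte."""
    i, j = random.sample(range(len(soup)), 2)
    combined = execute(bytearray(soup[i] + soup[j]))
    soup[i], soup[j] = combined[:TAPE_LEN], combined[TAPE_LEN:]
    for tape in (soup[i], soup[j]):
        for k in range(TAPE_LEN):
            if random.random() < MUTATION_RATE:
                tape[k] = random.randrange(256)

for _ in range(10_000):   # the real experiments run for millions of interactions
    interact(soup)
```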

Since bff has only seven instructions, represented by the characters “< > + - , [ ]”, and there are 256 possible byte values, following random initialization only 2.7 percent of the bytes in a given tape will contain valid instructions; any non-instructions are skipped over. Thus, at first, not much comes of interactions between tapes. Once in a while, a valid instruction will modify a byte, and this modification will persist in the soup. On average, though, only a couple of computational operations take place per interaction, and usually, they have no effect. In other words, while computation is possible in this toy universe, very little of it actually takes place. When a byte is altered, it’s likely due to random mutation, and even when it’s caused by the execution of a valid instruction, the alteration is arbitrary and purposeless.

But after a few million interactions, something magical happens: The tapes begin to reproduce. As they spawn copies of themselves and each other, randomness gives way to complex order. The amount of computation taking place in each interaction skyrockets, since—remember—reproduction requires computation. Two of bff’s seven instructions, “[” and “],” are dedicated to conditional branching, and define loops in the code; reproduction requires at least one such loop (“copy bytes until done”), causing the number of instructions executed in an interaction to climb into the hundreds, at minimum.

The code is no longer random, but obviously purposive, in the sense that its function can be analyzed and reverse-engineered. An unlucky mutation can break it, rendering it unable to reproduce. Over time, the code evolves clever strategies to increase its robustness to such damage. This emergence of function and purpose is just like what we see in organic life at every scale; it’s why, for instance, we’re able to talk about the function of the circulatory system, a kidney, or a mitochondrion, and how they can “fail”—even though nobody designed these systems.

We reproduced our basic result with a variety of other programming languages and environments. In one especially beautiful visualization, my colleague Alex Mordvintsev created a two-dimensional bff-like environment where each of a 200×200 array of “pixels” contains a tape, and interactions occur only between neighboring tapes on the grid. The tapes are interpreted as instructions for the iconic Zilog Z80 microprocessor, launched in 1976 and used in many 8-bit computers over the years (including the Sinclair ZX Spectrum, Osborne 1, and TRS-80). Here, too, complex replicators soon emerge out of the random interactions, evolving and spreading across the grid in successive waves.

VII. Thermodynamics

We don’t yet have an elegant mathematical proof of the sort Turing would have wanted, but our simulations suggest that, in general, life arises spontaneously whenever conditions permit. Those conditions seem quite minimal: a physical environment capable of supporting computation, a noise source, and enough time.

Replicators arise because an entity that reproduces is more dynamically stable than one that doesn’t. In other words, if we start with one tape that can reproduce and one that can’t, then at some later time, we’re likely to find many copies of the one that can reproduce, but we’re unlikely to find the other at all, because it will either have been degraded by noise or overwritten.
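
A back-of-the-envelope simulation (a hypothetical illustration, not one of our experiments) makes the point. Seed a small, fixed-size population with one string that copies itself and one that just sits there, add a little noise, and let it run: in the overwhelming majority of runs the copier’s descendants end up everywhere, while the inert string is corrupted or overwritten and disappears.

```python
import random

REPLICATOR, INERT = "copy-me", "just-sitting-here"
population = [REPLICATOR, INERT] + ["junk"] * 98   # fixed-size "world" of 100 slots

def step(population, noise=0.01):
    src, dst = random.randrange(len(population)), random.randrange(len(population))
    if population[src] == REPLICATOR:
        population[dst] = REPLICATOR               # replicators overwrite other slots
    if random.random() < noise:                    # occasional random corruption
        population[random.randrange(len(population))] = "junk"

for _ in range(20_000):
    step(population)

print(population.count(REPLICATOR), "replicators;",
      population.count(INERT), "copies of the non-replicator remain")
```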

This implies an important generalization of thermodynamics, the branch of physics concerned with the statistical behavior of matter subject to random thermal fluctuations—that is, of all matter, since, above absolute zero, everything is subject to such randomness. The famous second law of thermodynamics tells us that, in a closed system, entropy will increase over time; that’s why, if you leave a shiny new push mower outside, its blades will gradually dull and oxidize, its paint will start to peel off, and in a few years, all that will be left is a high-entropy pile of rust.

To a physicist, life is weird, because it seems to run counter to the second law. Living things endure, grow, and can even become more complex over time, rather than degrading. There is no strict violation of thermodynamics here, for life can’t exist in a closed system—it requires an input of free energy—but the spontaneous emergence and complexification of living systems has long seemed beyond the purview of physics.

It now seems clear, though, that by unifying thermodynamics with the theory of computation, we ought to be able to understand life as the predictable outcome of a statistical process, rather than regarding it uneasily as technically permitted, yet mysterious. Our artificial life experiments suggest that, when computation is possible, it will be a “dynamical attractor,” because replicating entities are more dynamically stable than non-replicating ones, and, as von Neumann showed, computation is required for replication.

In our universe, that requires an energy source. This is because, in general, computation involves irreversible steps, and these consume free energy. Hence, the chips in our computers draw power and generate heat when they run. Life must draw power and generate heat for the same reason: because it is inherently computational.
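
The standard quantification of this cost is Landauer’s principle (not named above, but it is the established result behind the claim): erasing a single bit of information, the archetypal irreversible step, dissipates at least

$$E_{\min} = k_B T \ln 2 \approx 3 \times 10^{-21} \text{ joules}$$

at room temperature. Real chips exceed this floor by many orders of magnitude, but the floor is never zero.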

VIII. Complexification

When we pick a tape out of the bff soup after a few million interactions, when replicators have taken over, we often see a level of complexity in the program on that tape that seems unnecessarily—even implausibly—high. A working replicator could consist of just a handful of instructions in a single loop, requiring a couple of hundred operations to run. Instead, we often see instructions filling up a majority of the 64 bytes, multiple and complex nested loops, and thousands of operations per interaction.

Where did all this complexity come from? It certainly doesn’t look like the result of simple Darwinian selection operating on the random text generated by a proverbial million monkeys typing on a million typewriters. In fact, such complexity emerges even with zero random mutation—that is, using only the initial randomness in the soup, which works out to a novella’s worth of gibberish. Hardly a million monkeys—and far too little to contain more than a few consecutive characters of working code.

ADVERTISEMENT
Nautilus Members enjoy an ad-free experience. Log in or Join now .

The answer recalls Margulis’ insight: the central role of symbiosis, rather than mere random mutation and selection, in evolution. When we look carefully at the quiescent period before tapes begin replicating, we notice a gradual, steady rise in the amount of computation taking place. This is due to the rapid emergence of imperfect replicators—very short bits of code that, in one way or another, have some nonzero probability of generating more code. Even if the code produced isn’t like the original, it’s still code, and only code can produce more code; non-code can’t produce anything!

Thus, there’s a selection process at work from the very beginning, wherein code begets code. This inherently creative, self-catalyzing process is far more important than random mutation in generating novelty. When bits of proliferating code combine to form a replicator, it’s a symbiotic event: By working together, these bits of code generate more code than they could separately, and the code they generate will in turn produce more code that does the same, eventually resulting in an exponential takeoff.

Moreover, after the takeoff of a fully functional tape replicator, we see further symbiotic events. Additional replicators can arise within a replicating tape, sometimes producing multiple copies of themselves with each interaction. In the presence of mutation, these extra replicators can even enter into symbiotic relationships with their “host,” conferring resistance to mutational damage.

IX. Ecology

Fundamentally, life is code, and code is life. More precisely, individual computational instructions are the irreducible quanta of life—the minimal replicating set of entities, however immaterial and abstract they may seem, that come together to form bigger, more stable, and more complex replicators, in ever-ascending symbiotic cascades.

In the toy universe of bff, the elementary instructions are the seven special characters “< > + - , [ ]”. On the primordial sea floor, geothermally driven chemical reactions that could catalyze further chemical reactions may have played the same role. In this growing understanding, life is a self-reinforcing dynamical process that boils down not to things, but to networks of mutually beneficial relationships. At every scale, life is an ecology.

Nowadays, we interact with computers constantly: the phones in our pockets and purses, our laptops and tablets, data centers, and AI models. Are they, too, alive?

They are certainly purposive, or we couldn’t talk about them being broken or buggy. But hardware and software are, in general, unable to reproduce, grow, heal, or evolve on their own, because engineers learned long ago that self-modifying code (like bff or DNA) is hard to understand and debug. Thus, phones don’t make baby phones, and apps don’t spontaneously generate new versions of themselves.

And yet: There are more phones in the world this year than last year; apps acquire new features, become obsolete, and eventually reach end-of-life, replaced by new ones; and AI models are improving from month to month. It certainly looks as if technology is reproducing and evolving!

If we zoom out, putting technology and humans in the frame together, we can see that this larger, symbiotic “us” is certainly reproducing, growing, and evolving. The emergence of technology, and the mutually beneficial (if, sometimes, fraught) relationship between people and tech, is nothing more or less than our own most recent major evolutionary transition. Technology, then, is not distinct from nature or biology, but merely its most recent evolutionary layer.

References

1. Cech, T.R. The RNA worlds in context. Cold Spring Harbor Perspectives in Biology 4, a006742 (2012).

2. Russell, M.J. & Martin, W. The rocky roots of the acetyl-CoA pathway. Trends in Biochemical Sciences 29, 358-363 (2004).

3. Szathmáry, E. & Smith, J.M. The major evolutionary transitions. Nature 374, 227-232 (1995).

4. Woese, C.R. On the evolution of cells. Proceedings of the National Academy of Sciences 99, 8742-8747 (2002).

5. Turing, A.M. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society s2-42, 230-265 (1937).

6. Von Neumann, J. First draft of a report on the EDVAC. University of Pennsylvania (1945).

7. Von Neumann, J. Theory of Self-Reproducing Automata. University of Illinois Press, Urbana, IL (1966).

8. Agüera y Arcas, B., et al. Computational life: How well-formed, self-replicating programs emerge from simple interaction. arXiv 2406.19108 (2024).
