One of my favorite albeit heavily paraphrased quotes from Albert Einstein is his assertion that the most incomprehensible thing about the universe is that it is comprehensible. (What he actually said, in his 1936 work “Physics and Reality,” is more long-winded, and includes a digression into Immanuel Kant and the meaning of “comprehensibility,” but he does write “… the eternal mystery of the world is its comprehensibility.”) In truth, this statement holds back a little. The greater mystery is that the universe is actually capable of self-comprehension.
From a time nearly 14 billion years ago when all matter and energy existed in an exquisitely uniform and boring state, the cosmos has evolved to contain complex structures that—in at least one tiny spot in our solar system—have gained mysterious things like agency and consciousness that compel them to try to decode reality. In doing so they (meaning we) also produce interpreted versions of reality that they place in a “dataome.”
By dataome, I mean all of the data (and the information it contains) that we generate, utilize, and propagate but which is not encoded in our DNA. The dataome encompasses cave paintings to books, flash drives to cloud servers, and all the structures built in service of those things. We exist in an uneasy symbiosis with this dataome, whose interests may not always align with ours even though the information it carries for us is critical for our evolutionary success. That includes the information we create describing our experience of reality itself.
Every equation of physics or every computer simulation of how planets, stars, and galaxies orbit and evolve, is a bizarre imprint of an interpretation of the universe by the universe, built into the universe by the rearrangement of its atoms into a dataome. But there’s an even deeper perspective: Was all of this really inevitable? Did we ever have a choice in creating a dataome or doing any of the things we do, and does any self-aware entity in the universe have a choice either?
In a wonderfully lively and extraordinarily idea-dense 2013 essay of nearly 70 pages titled “The Ghost in the Quantum Turing Machine,” the theoretical computer scientist Scott Aaronson goes deep in search of arguments for and against free will. It’s such fun that I want to spend some time with it here. He points out that many of us conflate the idea of random unpredictability with free will. For example, I can feel like I’m exerting free will if I, well, I don’t know, spontaneously write the word “sponge” here. It certainly seems entirely random.
That, Aaronson argues, is probably not right, because what we call randomness actually follows well-defined statistical rules of probability, and in that sense is never “free.” Its unpredictability is predictable. By contrast there is a class of unpredictable phenomena that can’t be measured by random probabilities; they have a different form of unpredictability. This is described by a property called Knightian uncertainty, after Frank Knight, an economist who worked on these ideas in the 1920s. In modern vernacular this is very much like the “black swan event” idea popularized skillfully by the writer and mathematical thinker Nassim Taleb. A black swan event is extremely rare, impacts the world greatly, and has explanations invented for it after the fact. But if an event or behavior can’t ever be objectively quantified by probabilities, it likely falls in the category of Knightian uncertainty.
Here’s an example based on Aaronson’s explanation: Imagine that a computer program generates random numbers as part of its operation. Perhaps it’s picking random color mixtures for its screen-saver. But if it picks the number 669988 there is a bug in its code that will cause it to crash. The original programmer knew this, but since 669988 is merely one choice out of a million possibilities for this six-digit number, they decided those were acceptable odds.
However, what if the code instead asks a human to provide a random six-digit number? The programmer cannot possibly know how likely it is for 669988 to be input. It could be a person’s lucky number, or there could be some weird human predisposition to those digits. Instead of being predictably unpredictable it is simply unpredictable, and cannot be described by straightforward mathematical probabilities. Instead it reflects the free will of a human being.
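To make the machine side of this contrast concrete, here is a minimal Python sketch of the programmer’s gamble (the crash-triggering value comes from the example above; the function name and trial count are my own illustrative choices). Because the picks are drawn from a known uniform distribution, the crash rate over many runs hovers near exactly one in a million — unpredictable, but predictably so.

```python
import random

CRASH_VALUE = 669988  # the one value (of a million) that triggers the bug

def screensaver_pick():
    """Pick a random six-digit number (000000-999999), as the program would."""
    return random.randint(0, 999_999)

# The programmer's gamble: each pick hits the bad value with
# probability exactly 1/1,000,000.
trials = 2_000_000
crashes = sum(screensaver_pick() == CRASH_VALUE for _ in range(trials))
print(f"crashes in {trials:,} picks: {crashes} "
      f"(expected about {trials / 1_000_000:.0f})")
```

A human typist breaks this analysis entirely: there is no known distribution to average over, so no amount of sampling pins down how often 669988 would actually be entered.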
But if you are a physicist (or a proper philosopher) you might pick a fight with this. That’s because, you’d say, what a human does at any moment is ultimately a consequence of a very long, very complex, chain of events. Each of those events can be broken down to individual interactions and occurrences of atoms and electrons, photons, and laws that—even if probabilistic—do still describe all options at all times; they’re all predictably unpredictable. And that includes things like quantum uncertainty. Surely we can always explain a human action, or anything else, by simply going far enough down this chain of random things. In this case there is no genuine free will; no real Knightian uncertainty in the base pieces of reality.
Aaronson argues that if the very earliest (quantum) state of the universe has Knightian uncertainty then things are more interesting. The precise state of the new universe need not be determined by the statistical rules of randomness. It could be just as weirdly unpredictable as the previous example of someone perversely guessing the code-crashing number. In this case the information that describes that state—and subsequently all states that the universe will take on, including all of its atoms, us, and any aliens—can be considered (in Aaronson’s terminology) as being made of “freebits.” And freebits are kind of like the last word in cosmic choice.
These freebits also have to be quantum in nature. That means they are also “qubits”—the version of plain old 1 and 0 bits that applies to objects and systems exhibiting quantum behavior. They are fuzzy, undetermined things until called upon and snapped into focus. That’s a complication I’m going to avoid dealing with here, because it will make our heads hurt. Luckily, getting a sense of where freebits lead us doesn’t require knowing all of those details.
The story to pay attention to is simply that these freebits could stick around throughout the history of the universe. Or, to turn this the other way: Suppose you want to track back the chain of events that led to a specific incident—something interesting in a physics experiment, or a chicken crossing the road. For some incidents there will be a chain that goes all the way back to the original freebits. And because those freebits obey Knightian uncertainty it means that there is no ultimate answer for why you saw what you saw, no neat and tidy final, probabilistic solution. It will never, ever be known why the chicken crossed the road.
That could, perhaps, also apply to structures like the human brain and its thoughts. If we could disentangle the untold quadrillions of molecular and atomic interactions and chained events in a brain, and the ever-so-subtle nudges of quantum uncertainty here and there, we might find that it all leads back to the original freebits, thereby restoring some kind of free will to ourselves. I’m not suggesting any sort of daft mystical quantum-brain connection; this is all just physics (well, all physics-at-the-boundary-with-philosophy). But it could well be that your spontaneous decision to place an unsuspecting chicken at the roadside is truly Knightian, with a lineage going all the way back to the Big Bang.
Yet, this also implies that there are only so many ways that anything can happen in the cosmos, only so many ways that history could unfurl. It’s rather like taking a cross-country road trip from one continental coastline to another: You only have so many places you can start from, and each will influence where you end up. Is the universe truly open-ended in its capacity to generate informational novelty? Perhaps not entirely.
You might, if you’ve survived reading this far, be wondering how many freebits we’re talking about. After all, the knowable universe is big but decidedly finite. We can only ever observe the realm of the cosmos from which light has had time to reach us since the Big Bang some 13.8 billion years ago. This is tricky, but we can actually estimate the maximum number of any kind of bits (not just freebits) in the observable universe as approximately ten to the power of 122, or 10^122. The implication is that this is the limit of the number of interesting things that can ever happen in the universe. No do-overs, no extras, this is it.
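For the curious, here is one rough route to a number of that size, sketched in Python. This is my own back-of-the-envelope illustration, not Aaronson’s derivation: it applies a Bekenstein-style holographic bound (bits proportional to horizon area in Planck units) to the observable universe, and lands within an order of magnitude or two of 10^122. The exact figure depends on which horizon radius and which bound you adopt.

```python
import math

PLANCK_LENGTH = 1.616e-35   # meters
HORIZON_RADIUS = 4.4e26     # meters; approximate comoving radius of the
                            # observable universe (an assumed round number)

# Bekenstein-style bound: bits scale with horizon area in Planck units.
area = 4 * math.pi * HORIZON_RADIUS**2
bits = area / (4 * PLANCK_LENGTH**2 * math.log(2))

print(f"log10(bit budget) is roughly {math.log10(bits):.1f}")
```

The point is not the precise exponent but its staggering, and finite, size.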
But this also means that freebits, and bits, are getting “used up” over time. Indeed, they must be for events to occur. And this brings us full-circle back to the classical physics ideas of the laws of thermodynamics and entropy, and the Landauer limit on energy needed to erase bits. Storing and accessing information means using energy. But if you use energy you have to maintain or increase the entropy of the cosmos (generally speaking). If there are a finite number of bits in all of reality, even if a huge number like 10^122, then eventually the universe runs out of ways to change its entropy, and its bits.
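The Landauer limit mentioned above has a simple form: erasing one bit at temperature T costs at least E = k_B · T · ln 2 of energy, where k_B is the Boltzmann constant. A quick Python check (the two example temperatures are my own choices — roughly room temperature, and today’s cosmic microwave background):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_energy(temperature_kelvin):
    """Minimum energy (joules) to erase one bit at a given temperature."""
    return K_B * temperature_kelvin * math.log(2)

print(f"room temperature (300 K):    {landauer_energy(300):.2e} J per bit")
print(f"cosmic background (2.7 K):   {landauer_energy(2.7):.2e} J per bit")
```

At room temperature this works out to roughly 3 × 10⁻²¹ joules per erased bit. The cost scales with temperature, so it falls as the universe cools toward uniformity, but it never reaches zero.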
At this point the story connects to the far, far, far cosmic future in which everything is in thermal equilibrium: Space is at the same temperature, everywhere. There are no hot and cold spots, no ways for energy to flow from warm things to chilly things. No more bits to flip and the universe ends up as a tepid bath, full of nothing but regrets. (Although regrets imply information, and there would be no way to access that at this late stage).
Is any of this a valid description of the world that has been and is to come? We don’t really know, although our best bet is that the ever-expanding universe is indeed heading to eventual boredom in thermal uniformity. Concepts like freebits are, for now, merely intriguing proposals about what makes reality tick under the surface.
The essential point to all of this is that information shows itself to be more than one might expect. It isn’t just a way to probe the fundamentals of nature; it may be part of the fundamentals. Consequently, the fact that the human dataome is becoming increasingly entwined with the fabric of the universe—as pieces of manipulated matter and energy—means that we (as living things) are fully committed to the universal drive toward that future ocean of unchanging, equilibrated spacetime. It is as if we popped out of the vacuum as a temporary fluctuation of energy, and we’ve been clawing our way back ever since.
Caleb Scharf is the author of The Ascent of Information. He is an astrophysicist, the director of astrobiology at Columbia University in New York, and a founder of yhousenyc.org, an institute that studies human and machine consciousness. His previous books include The Zoomable Universe: An Epic Tour Through Cosmic Scale, from Almost Everything to Nearly Nothing.
From The Ascent of Information: Books, Bits, Genes, Machines, and Life’s Unending Algorithm by Caleb Scharf. Published by arrangement with Riverhead, a member of Penguin Random House LLC. Copyright © Caleb Scharf 2021.