Was it a chance encounter when you met that special someone or was there some deeper reason for it? What about that strange dream last night—was that just the random ramblings of the synapses of your brain or did it reveal something deep about your unconscious? Perhaps the dream was trying to tell you something about your future. Perhaps not. Did the fact that a close relative developed a virulent form of cancer have profound meaning or was it simply a consequence of a random mutation of his DNA?

We live our lives thinking about the patterns of events that happen around us. We ask ourselves whether they are simply random, or if there is some reason for them that is uniquely true and deep. As a mathematician, I often turn to numbers and theorems to gain insight into questions like these. As it happens, I learned something about the search for meaning among patterns in life from one of the deepest theorems in mathematical logic. That theorem, simply put, shows that there is no way to know, even in principle, if an explanation for a pattern is the deepest or most interesting explanation there is. Just as in life, the search for meaning in mathematics knows no bounds.

First, some preliminaries. Consider the following three strings of characters:

1. 100100100100100100100100100100100100100100100100100100100100100100100

2. 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97

3. 38386274868783254735796801834682918987459817087106701409581980418

How can we describe these strings? We could describe each one simply by writing it out in full, as we just did. But it is pretty obvious that there are shorter descriptions of the first two strings. The first is just the pattern “100” repeated over and over. The second is a list of the first 25 prime numbers. What about the third string? We can describe it by printing the string itself. But is there a better, shorter description?

In the early 1960s, an American teenager named Gregory Chaitin, the world-famous Russian mathematician Andrey Kolmogorov, and the computer science pioneer Ray Solomonoff independently formulated a way of measuring the complexity of strings of characters. Their ideas have come to be called Kolmogorov Complexity Theory or Algorithmic Information Theory. They posited that a string is as complex as the length of the shortest computer program that can produce it. That is, take a string and look for a short computer program that produces it. The program is a type of description of the string. If the shortest such program is very short, then the string has a simple pattern and is not very complex; we say that it “has little algorithmic content.” In contrast, if a long program is required to produce the string, then the string is complicated and “has more algorithmic content.” For any string, one looks for the shortest program that produces that string. The length of that program is called the Kolmogorov complexity of the string.

Let us look at the above three strings. The first two strings can be described by relatively short computer programs:

1. Print “100” 30 times.

2. Print the first 25 prime numbers.

The Kolmogorov complexity of the first string is less than that of the second because the first program is shorter than the second. What about the third string? There is no obvious pattern for this string. Nevertheless, there exists a silly program that prints this sequence:

3. Print “38386274868783254735796801834682918987459817087106701409581980418”

While this program will do the job, it is not very satisfying. Perhaps there is a shorter program that shows the string has a pattern. When the shortest program that produces a string is essentially “Print the string,” the string has no pattern at all, and we call it random. The catch is that even when we do not see any pattern, there could still be one. In mathematics, as in life, we are confronted with many seemingly random strings.
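
To make this concrete, here is a minimal sketch in Python (my illustration, not the author’s; the particular language and character counts are incidental). Each snippet is a complete program that produces one of the three strings, and the length of a program’s source text stands in, roughly, for the length of a description. Strictly speaking, each program only puts an upper bound on its string’s Kolmogorov complexity, since a still-shorter program might exist.

    # Three tiny programs, one per string. The length of each program's
    # source text is a rough stand-in for "length of description."

    # String 1: the pattern "100" repeated, as in program 1 above.
    print("100" * 30)

    # String 2: the first 25 primes, generated by trial division.
    primes, n = [], 2
    while len(primes) < 25:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    print(", ".join(str(p) for p in primes))

    # String 3: no pattern found, so the program simply spells the string out.
    print("38386274868783254735796801834682918987459817087106701409581980418")

Notice that the first two programs would stay essentially the same length even if their strings were made vastly longer (repeat the pattern a million times, list the first thousand primes), while the third program must grow along with its string. That is the sense in which the third string has more algorithmic content.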

We might try to use the amazing powers of modern computers to find a pattern and a shortest program. Wouldn’t it be lovely if there were a computer that would simply calculate the Kolmogorov complexity of any string? This computer would accept a string as input, and output the length of the shortest program that can produce that string. Surely, with all the newfangled computer tools like AI, deep learning, big data, quantum computing, etc., it would be easy to create such a computer.

Alas, no such computer can exist! As powerful as modern computers are, this task cannot be accomplished. This is the content of one of the deepest theorems in mathematical logic. Basically, the theorem says that the Kolmogorov complexity of a string cannot be computed: There is no mechanical procedure that can determine the size of the smallest program that produces a given string. It is not that our current level of computer technology is insufficient for the task, or that we are not clever enough to write the algorithm. Rather, the very notions of description and computation guarantee that no such computer can ever perform the task for every string. A computer might find some pattern in a string, but it cannot certify that it has found the best one. We might find a short program that outputs a given string, but there could always exist an even shorter one. We will never know.
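
What a computer can do is find some shorter description, with no guarantee that it is the shortest. A general-purpose compressor, for instance, gives a computable upper bound on how much algorithmic content a string has. Here is a minimal sketch using Python’s standard zlib module (again my illustration, not part of the theorem); running it, you would likely find that the repetitive string shrinks far more than the seemingly random one, but neither compressed size comes with any certificate that it is the best possible.

    import zlib

    patterned = "100" * 30
    mystery = "38386274868783254735796801834682918987459817087106701409581980418"

    for label, s in [("patterned", patterned), ("mystery", mystery)]:
        compressed = zlib.compress(s.encode(), 9)
        # The compressed size (plus the fixed cost of the decompressor) is an
        # upper bound on the string's descriptive complexity. It shows that
        # some short description exists; it can never show that no shorter
        # one does.
        print(label, len(s), "characters ->", len(compressed), "bytes compressed")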

The proof that the Kolmogorov complexity of a string is not computable is a bit technical. But it is a proof by contradiction, and we can get a sense of how that works by looking at two cute little paradoxes.

The interesting number paradox revolves around the claim that all natural numbers are interesting. 1 is the first number, so that is interesting. 2 is the first even number. 3 is the first odd prime number. 4 is interesting because 4=2×2 and 4=2+2. We can continue in this fashion and find interesting properties for many numbers. At some point we might come to some number that does not seem to have an interesting property. We can call that number the first uninteresting number. But that, in itself, is an interesting property. In conclusion, the uninteresting number is, in fact, interesting!

The ideas behind the proof about Kolmogorov complexity are also similar to those in the Berry paradox, which is about describing large numbers. Notice that the more words you use, the larger the number you can describe. For example, in three words you can describe “a trillion trillion,” while in five words you can describe “a trillion trillion trillion trillion,” which is much larger. Now consider the number described by the following phrase:

“The smallest number that cannot be described in fewer than 15 words.”

This number needs 15, or 16, or even more words to describe it; it cannot be described in 12, 13, or 14 words. However, there is a major problem: The phrase above describes the number in only 12 words. Our description of the number violates its own defining condition. This is a contradiction.

In both the interesting number paradox and the Berry paradox, we arrive at contradictions by assuming there is an exact way of describing something. Similarly, the proof that Kolmogorov complexity is not computable springs from the fact that if it were, we would arrive at a contradiction.
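
Here is a rough sketch of where that contradiction comes from; it is the standard argument, written in Python, with the function kolmogorov_complexity left hypothetical, which is precisely the point. Suppose such a function could be computed. Then the short program below would search through all strings and return the first one whose complexity exceeds some enormous number N. But the program itself, together with the digits of N, is a description far shorter than N characters of a string that, by definition, has no description shorter than N characters. Like the Berry phrase, it is a too-short description of something defined to have no short description.

    from itertools import count, product

    def first_complex_string(N, kolmogorov_complexity):
        # Hypothetical: assumes kolmogorov_complexity(s) is computable,
        # which is exactly the assumption the theorem rules out.
        for length in count(1):                      # strings of length 1, 2, 3, ...
            for chars in product("01", repeat=length):
                s = "".join(chars)
                if kolmogorov_complexity(s) > N:
                    # This short program, plus the digits of N, now describes a
                    # string whose shortest description is supposed to be longer
                    # than N characters: a contradiction once N is large enough.
                    return s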

The fact that Kolmogorov complexity is not computable is a result in pure mathematics, and we should never confuse that pristine realm with the far more complicated and messy real world. However, there are certain themes from Kolmogorov complexity theory that we might take with us when thinking about the real world.

Many times we are presented with something that looks totally chaotic. This randomness is unnerving and so we search for a pattern that eliminates some of the chaos. If we do find a pattern, it is not clear that it is the best pattern to explain what we see. We might ask ourselves if there exists a deeper pattern that provides a better explanation. What Kolmogorov complexity theory teaches is that, at the deepest level, there is no sure way to determine the best pattern. We will simply never know if the pattern that we have found is the best one.

But that makes the search eternally interesting. By definition, something is interesting if it demands more thought. A fact that is obvious and totally understood does not require further thought. The fact that 6 times 7 is 42 is totally comprehensible and uninteresting. It’s when we are not certain about ideas that we need to confirm them and think about them. The search for better patterns will always be interesting.

There is an added complexity in the real world. Whereas in the world of strings and computer programs there are no mistakes, in the real world we can, and do, make mistakes. We can easily see if a certain program prints out a string or not. While we might not be able to determine the optimal program to print a certain string, we can determine if the program prints the required string. In contrast, the real world is much more complicated. We can think we recognize a pattern when, in fact, we are mistaken.

Now our understanding of our search for meaning is starting to come together. We abhor randomness and love patterns. We are biologically programmed to find patterns that explain what we see. But we can never be certain that the pattern we’ve identified is the right one. Even if we could somehow be assured that we haven’t made a mistake, and that we are exhibiting computer-like perfection, there may always be a deeper truth to unearth. This tension helps drive our love of literature, theater, and the cinema. When we read a novel or watch a play, the author or director is presenting us with a sequence of events that has a common theme, pattern, or moral. Literature, plays, and the cinema offer us a delightful escape from the usual unintelligible, meaningless chaos that we find in the real world around us. Really good literature goes further and leaves us with the possibility of many interpretations. We come face to face with the incomputability of Kolmogorov complexity.

This tension also defines how we engage with our own lives. As we travel through the seemingly random events of our lives, we search for patterns and structure. Life is full of “ups and downs.” There are the joys of falling in love, giggling with your child, and feeling a sense of great accomplishment when a hard job is completed. There is also the pain of a crumbling relationship, the agony of failing at a task after great effort, and the tragedy of the death of a loved one. We try to make sense of all this. We abhor the feeling of total randomness and the idea that we are just following the chaotic laws of physics. We want to know that there is some meaning, purpose, and significance in the world around us. We want a magical story of a life, so we tell ourselves stories.

Sometimes the stories are simply false. Sometimes we lie to ourselves and those around us. And sometimes the patterns we identify are correct. But even if the story is correct, it is not necessarily the best one. We can never know if there is a deeper story that is more exact. As we age and suffer from ennui, we gain certain insights about the universe that we did not have before. We find better patterns. Maybe we get to see things more clearly. Or maybe not. We will never know. But we do know that the search is guaranteed to never end.

Noson S. Yanofsky has a Ph.D. in mathematics from The Graduate Center of The City University of New York. He is a professor of computer science at Brooklyn College of The City University of New York. In addition to writing research papers he has co-authored Quantum Computing for Computer Scientists and authored The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us. Noson lives in Brooklyn with his wife and four children.
