A few years back, 12 million of us clicked over to watch the “Pachelbel Rant” on YouTube. You might remember it. Strumming repetitive chords on his guitar, comedian Rob Paravonian confessed that when he was a cellist, he couldn’t stand the Pachelbel Canon in D. “It’s eight quarter notes that we repeated over and over again. They are as follows: D-A-B-F♯-G-D-G-A.” Pachelbel made the poor cellos play this sequence 54 times, but that wasn’t the real problem. Before the end of his rant, Paravonian showed how this same basic sequence has been used everywhere from pop (Vitamin C: “Graduation”) to punk (Green Day: “Basket Case”) to rock (The Beatles: “Let It Be”).

This rant emphasized what music geeks already knew—that musical structures are constantly reused, often to produce startlingly different effects. The same is true of mathematical structures in physical theories, which are used and reused to tell wildly dissimilar stories about the physical world. Scientists construct theories for one phenomenon, then bend pitches and stretch beats to reveal a music whose progressions are synced, underneath it all, in the heart of the mathematical deep.

Eugene Wigner suggested a half-century ago that this “unreasonable effectiveness” of mathematics in the natural sciences was “something bordering on the mysterious,” but I’d like to suggest that reality may be more mundane. Physicists use whatever math tools they’re able to find to work on whatever problems they’re able to solve. When a new song comes on, there’s bound to be some overlap in the transcription. These overlaps help to bridge mutations of theory as we work our way toward a lead sheet for that universal hum.

Superconductors to the Higgs Field

At the atomic level, modern physics breaks apart into three underlying forces. The strong force cements the nucleus together, overcoming repulsion from like charges. Electromagnetism holds electrons in place, and, occasionally, the weak force causes radioactive nuclei to split apart. But the differences among them posed a question: Why, among the three, is the weak force so weirdly wimpy?

In 1941, theorist Julian Schwinger proposed an answer. He suggested that the weak force’s mediating particle—the W, for “weak”—might function like a massive, electrically charged version of the photon, which has neither mass nor charge. Its large mass would restrict the number that were produced, which would make the interaction seem weak, regardless of its actual strength. To sweeten the deal, the W’s electrical charge hinted at a link between weak and electromagnetic forces.1
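In modern notation, which postdates Schwinger's proposal, the suppression can be sketched like this: at energies far below the W's mass, the weak force looks like a contact interaction whose effective strength, the Fermi constant G_F, is diluted by the square of that mass, whatever the intrinsic coupling g may be. A standard textbook relation:

```latex
\frac{G_F}{\sqrt{2}} = \frac{g^2}{8\,M_W^2}
```

A heavier W means a smaller G_F, so the interaction looks feeble even if g is comparable in size to the electromagnetic coupling.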

But this unification of forces faced a fundamental challenge. The most straightforward models marrying the electromagnetic and weak forces into a so-called “electroweak” force were ones in which the symmetry of the theory was “spontaneously broken.” Practically, this meant that physicists would build mathematical theories in which the electromagnetic and weak forces started on an equal footing. Then they would introduce changes in the structure of their theories to make the forces exactly as unequal as they are observed to be. Unfortunately, the mathematics of field theory (specifically, Goldstone’s Theorem) required these sorts of changes to be accompanied by the production of massless particles—particles that, if they existed, should already have been observed, but had not.
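The textbook toy model behind Goldstone's Theorem is worth sketching here (standard material, though not spelled out in the article). Give a complex field φ a "Mexican hat" potential:

```latex
V(\phi) = \mu^2\,|\phi|^2 + \lambda\,|\phi|^4,
\qquad \mu^2 < 0,\ \lambda > 0
```

The minimum is not at φ = 0 but on a circle of radius v/√2, where v = √(−μ²/λ). Choosing one point on that circle breaks the symmetry, and sliding along the circle costs no energy; that zero-cost direction is Goldstone's unwanted massless particle.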

This was where Philip Anderson, the American master of condensed matter theory, had a crucial insight. He observed that the dominant theory of superconductors (the “BCS theory” after the initials of its inventors) gave the photon a mass, even though the photon didn’t have any mass in the underlying electromagnetic theory. The BCS theory did break a symmetry, but it didn’t produce any additional massless particles.

Why not? The loophole was that the BCS theory broke symmetries contextually, inside the superconductor, not as a feature of the underlying electromagnetic theory. Consider an analogy. Suppose you were a prisoner in a room with a strong fan blowing continually east to west. Also suppose, having the time, you decided to reconstruct the laws of physics from scratch. Due to the blowing fan, your Prison Laws of Physics might break the east/west symmetry, even if the laws that apply elsewhere—Newton’s Laws, say—have no such broken symmetry. And just as the symmetries of Newton’s Laws won’t all apply inside your cell, the symmetries of electromagnetic theory won’t all apply inside a superconductor.
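The effect Anderson had in mind can be stated compactly through the London equation, a standard result about superconductors (my gloss, not the article's): inside the material, magnetic fields obey

```latex
\nabla^2 \mathbf{B} = \frac{\mathbf{B}}{\lambda_L^2}
```

so fields die off over the penetration depth λ_L instead of propagating freely. That exponential screening is exactly what a photon with mass on the order of ħ/(λ_L c) would produce, even though the photon of vacuum electromagnetism has no mass at all.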

Anderson’s insight into symmetry breaking would lead Peter Higgs to hypothesize that the vacuum itself might break the symmetries of the electroweak theory. To do this, Higgs introduced an additional field, now known as the “Higgs field,” with a non-zero size at every point in space. This Higgs field would break the symmetries of the electroweak theory contextually, all throughout the universe.
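Continuing the toy model from above (a standard sketch, with a simplified U(1) gauge symmetry standing in for the full electroweak structure): couple the Mexican-hat field to a gauge field A_μ, and the field's vacuum value v becomes a mass for the gauge boson.

```latex
\left|\,(\partial_\mu - i g A_\mu)\,\phi\,\right|^2
\;\supset\; \frac{g^2 v^2}{2}\, A_\mu A^\mu
\quad\Longrightarrow\quad m_A = g\,v
```

No extra massless Goldstone particle appears; its degree of freedom is absorbed into the now-massive gauge boson. In the electroweak case, the bosons that gain mass this way are the W and Z.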

The introduction of an omnipresent physical field to fix a mathematical problem borders unnervingly on the mysterious. But when CERN researchers announced that they’d found a particle (the famous “Higgs boson”) very much like the one associated with Higgs’ theoretical field, it was time to concede that the math had been unreasonably effective, once again.

Temperature to Particle Physics

The temperature of a gas, we now know, is related to the average kinetic energy of the particles that make it up. But this average doesn’t tell us how the total energy is distributed among the particles in the gas. Averages don’t distinguish among the possibilities of one particle having all the energy, every particle having the same energy, and so on. Without any extra assumptions, it’s impossible to say more.
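For a monatomic ideal gas, the relation is the standard textbook one (not quoted in the article):

```latex
\langle E_{\mathrm{kin}} \rangle = \tfrac{3}{2}\, k_B T
```

per particle, with k_B Boltzmann's constant. Temperature pins down the average and nothing more.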

In the second half of the 19th century, Ludwig Boltzmann added one. He insisted—as a postulate—that every possible arrangement of energy is equally likely. This doesn’t mean that every distribution of energy will occur equally often. It would be very unlikely, for instance, for all the energy to be stored in a single particle, since there are many more ways for energy to be spread out among the particles than there are ways for it all to be given to one. This allowed him to discover which distributions of energy were most likely simply by counting how many microscopic arrangements produce the same overall effect when viewed at scales where individual particles can’t be distinguished.
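Boltzmann's counting argument is easy to check by brute force. Here is a minimal sketch in Python (the particle and quanta numbers are mine, chosen small enough to enumerate):

```python
from itertools import product
from collections import Counter

# Brute-force version of Boltzmann's postulate: hand out 4 distinguishable
# quanta of energy among 3 particles, treat every assignment (microstate)
# as equally likely, and count how many microstates realize each overall
# energy split (macrostate). Illustrative numbers, not from the article.
N_PARTICLES, N_QUANTA = 3, 4

macrostates = Counter()
for assignment in product(range(N_PARTICLES), repeat=N_QUANTA):
    tally = Counter(assignment)
    split = tuple(sorted(tally.get(p, 0) for p in range(N_PARTICLES)))
    macrostates[split] += 1

for split, ways in sorted(macrostates.items()):
    print(f"energy split {split}: {ways} of {N_PARTICLES**N_QUANTA} microstates")
# The split (0, 0, 4) -- all energy on one particle -- occurs just 3 ways,
# while the flatter split (1, 1, 2) occurs 36 ways.
```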

The mathematical machinery used to calculate statistical properties from this idea is called the partition function: Z, for the German word Zustandssumme, the “sum over states.” Since Z adds up the contributions of every possible state, every important statistical property of a system (pressure, temperature, etc.) can be found by performing various mathematical operations on it. The partition function revolutionized thermodynamics.
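In modern notation (the standard textbook form, not the article's), the recipe looks like this: for states with energies E_i at temperature T, writing β = 1/k_BT,

```latex
Z = \sum_i e^{-\beta E_i},
\qquad
\langle E \rangle = -\frac{\partial \ln Z}{\partial \beta},
\qquad
F = -k_B T \ln Z
```

Average energy, the free energy F, and from them pressure, entropy, and the rest all fall out of derivatives of ln Z.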

Surprisingly, Z would pop up again nearly a century later—this time, in the physics of fundamental particles rather than temperature and pressure. Just as statistical mechanics allowed energy to take any arrangement among particles, the American physicist Richard Feynman suggested that particles themselves might be able to take literally any path from one spot to another. By adding up the contributions of all the paths and following a weighting procedure to tell which paths were more or less likely than the others, Z made the jump from equilibrium physics to quantum dynamics.
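Set side by side, the family resemblance is hard to miss (schematic forms, standard in textbooks):

```latex
Z_{\text{thermal}} = \sum_{\text{states}} e^{-E/k_B T}
\qquad\longleftrightarrow\qquad
Z_{\text{quantum}} = \int \mathcal{D}[x(t)]\; e^{\,i S[x(t)]/\hbar}
```

One sums Boltzmann weights over states; the other sums phases, weighted by the classical action S, over every possible path x(t). Trade the time variable for an imaginary one and the two expressions become formally identical.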

Since we live in a quantum universe, this clever formalism for dealing with the statistics of many-particle systems took its place at the center of modern physics. The same structures that were provisional and statistical in the time of Boltzmann were fundamental by the time of Higgs—and they remain at the heart of how quantum theories are represented even today.

Heat to Quantum Uncertainty

Never has a math tool wended its way through thickets of interpretation more variously than the Fourier series. It was invented by Jean-Baptiste Fourier, applied mathematician and on-again off-again frenemy of Napoleon Bonaparte, to aid in studies of heat diffusion in metal plates. The real history had some hiccups—Fourier misstated the “theorem” that made him famous, leading to a century’s worth of confusion—but let’s not get distracted. The principal importance of Fourier’s inquiries was to show that any mathematical function that eventually repeats itself (any “periodic” function) can be represented by an infinite number of sine and cosine terms, added together. The Fourier series just tells you how much to weight each term in that sum.
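Concretely, for a function f with period 2π, the standard statement of the result is:

```latex
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\bigl(a_n \cos nx + b_n \sin nx\bigr),
\qquad
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx \,dx,
\quad
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx \,dx
```

The integrals are the weights: how much of each harmonic the function contains.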

It might seem impractical to go from one single function to an infinite sum, and sometimes it is. When it isn’t, the reason is that many simple physical models are easy to solve when their inputs are sinusoidal. Taking the Fourier transform of a function is helpful whenever it turns one problem you can’t solve into infinitely many problems you can solve straightaway.
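Fourier's original heat problem shows the trick in action. Here's a minimal numerical sketch (the parameter values are mine, purely illustrative): diffusion mixes every point with its neighbors, which is hard, but in frequency space each sine mode simply decays on its own, which is trivial.

```python
import numpy as np

# Heat diffusion u_t = alpha * u_xx on a periodic rod, solved by
# decomposing into Fourier modes: each mode decays independently
# as exp(-alpha * k^2 * t), so one hard PDE becomes many one-line
# problems. Illustrative parameters, not from the article.
L, N, alpha, t = 2 * np.pi, 256, 0.1, 1.0
x = np.linspace(0, L, N, endpoint=False)
u0 = np.where(np.abs(x - L / 2) < 0.5, 1.0, 0.0)  # initial hot spot

k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi  # angular wavenumbers
u_hat = np.fft.fft(u0)                      # decompose into modes
u_hat *= np.exp(-alpha * k**2 * t)          # each mode decays on its own
u = np.fft.ifft(u_hat).real                 # reassemble the temperature

print(f"peak temperature after t={t}: {u.max():.3f}")
```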

In the quantum picture of matter, the position of a particle is described with waves. Per Fourier, the more tightly a wave is localized in space, the greater the spread of frequencies required to describe it. Because the speed of a particle, according to quantum mechanics, is proportional to the spatial frequency of its wave, this means that the more precisely you localize the position of a particle, the less precisely you know its speed.
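You don't need quantum mechanics to see the tradeoff; a fast Fourier transform will do. The sketch below (illustrative parameters, my construction) squeezes a Gaussian wave packet and watches its frequency spread grow in compensation:

```python
import numpy as np

# Numerical check of the Fourier tradeoff behind the uncertainty
# principle: the narrower a wave packet in position, the wider its
# spread of spatial frequencies. For a Gaussian packet, the product
# of the two spreads is constant (the minimum-uncertainty case).
N, dx = 4096, 0.01
x = (np.arange(N) - N // 2) * dx

for width in (0.1, 0.5, 2.0):
    psi = np.exp(-x**2 / (2 * width**2))      # Gaussian wave packet
    prob_x = psi**2
    sx = np.sqrt(np.sum(x**2 * prob_x) / np.sum(prob_x))
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(psi)))**2
    k = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi
    sk = np.sqrt(np.sum(k**2 * spectrum) / np.sum(spectrum))
    print(f"dx = {sx:.3f}, dk = {sk:.3f}, product = {sx * sk:.3f}")
# Every product comes out ~0.5: squeezing dx forces dk up, and the
# Gaussian sits exactly at the Heisenberg minimum.
```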

This, however, is just the most famous version of Heisenberg’s Uncertainty Principle—a fact, it turns out, that applies just as well to classical waves as it does to quantum particles. Keep in mind that this all relates to a mathematical technique that was developed to solve how heat moves in metal plates. Now, its descendants are used not only for quantum mechanics, but also for MP3 files, image compression, chemical spectroscopy… well, a tally of mutations could take a while. Long enough, perhaps, to bring us back to that original notion of scientists as oddball musicians, plugging away at one sequence after another until they hit upon a groove that works.

David Kordahl is a freelance writer and physics teacher who lives in Tempe, Arizona. 
