In engineering, uncertainty is usually as welcome as sand in a salad. The development
of digital technologies, from the alphabet to the DVD, has been
driven in large part by the desire to eliminate random fluctuations,
or noise, inherent in analog systems like speech or VHS tapes. But
randomness also has a special ability to make some systems work
better. Here are five cases where a little chaos is a critical part
of the plan:

Stochastic Resonance

Scientists who make
sensitive detectors often go to extreme lengths to eliminate noise.
If they are trying to spot neutrinos, for example, they’ll build their detector at the bottom of a mine
to stop the results from being swamped
by regular cosmic radiation. But there
are times when adding noise is the only way
to pick up a weak periodic signal.

This phenomenon is called stochastic resonance, and it works something
like this: Imagine you’re trying to count the number of waves at
the seashore, and your detector is a wall built across the middle of
a beach. The height of the wall represents the threshold of
detection: Only if water washes over the top of the wall will it be
registered. But our imaginary wall is high enough that the swell of
the water never quite rises to the top of the wall. Adding noise is
like adding some rapidly changing wind—it whips up waves in a
random pattern. With the right amount and right variation of wind,
when the wave comes in the water will splash over the top of the wall
and be detected. If there’s too little wind, the calmer waves will
never make it over the top; too much wind and the water level may
stay over the wall for long stretches, drowning out the signal of the
waves.
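
This recipe translates directly into code. Here’s a minimal sketch in Python (the signal strength, threshold, and noise levels are invented for illustration): a sine wave that always peaks below the detection threshold is invisible on its own, is tracked nicely once a moderate amount of noise is added, and gets drowned out again when the noise is overwhelming.

```python
import math
import random

random.seed(42)

N = 20_000
threshold = 1.0
# A weak periodic signal peaking at 0.8: always below the threshold.
signal = [0.8 * math.sin(2 * math.pi * t / 100) for t in range(N)]

def pearson(xs, ys):
    """Correlation between the detector's output and the hidden signal."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return 0.0 if vx == 0 or vy == 0 else cov / math.sqrt(vx * vy)

for sigma in [0.0, 0.05, 0.3, 5.0]:  # no wind, too little, moderate, a gale
    detections = [1 if s + random.gauss(0, sigma) > threshold else 0
                  for s in signal]
    print(f"noise level {sigma:4.2f}: "
          f"correlation with signal = {pearson(detections, signal):.2f}")
```

With no noise, or too little, nothing ever clears the wall and the correlation is zero; with far too much, the splashes stop tracking the swell. The moderate setting follows the hidden wave best.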

And stochastic resonance doesn’t just apply to scientific instruments: There’s
evidence that our own nervous systems use it to detect signals between cells,
and that it also plays a role in our perception
of sight, touch, and hearing. For example, the balance of elderly
people can be improved by fitting their shoes with insoles that produce “noisy” vibrations
below the threshold of sensation. This improves the seniors’ sense
of touch in their feet, which leads to better balance. Researchers
believe this works because the sub-threshold stimulation primes
sensory neurons to fire when a foot contacts the floor. The
stimulation has to be somewhat random because otherwise the sensory
neurons would adapt to, and ultimately ignore, the additional
stimulation.

Cryptography

Codes and ciphers are a case where being predictable can literally get you
killed. The goal of cryptography is to turn a message—the
“plaintext”—into a meaningless jumble—the “ciphertext.”
Ideally, the ciphertext should be indistinguishable from a random
string of letters or numbers: If code-breakers discern any pattern in
the ciphertext, they can use it to help reveal the plaintext.

For example, during
World War II, Germany relied on a code machine called Enigma. An
operator would push a button on its keyboard, and a letter on a panel
would light up, as determined by a system of rotating wheels inside.
Crucially for the Allies, the setup was such that a letter couldn’t
be encrypted as itself; that is, a “b” could be encoded as any
letter except “b.” This might sound like a good thing—shouldn’t
all the ciphertext be completely different from the plaintext? But in
fact, it was a critical weakness, reducing the number of
possibilities code-breakers had to consider.

Modern cryptography
encrypts messages by combining plaintext with randomly generated
digital keys using various algorithms. The security of the system
depends on the algorithm chosen, the length of the keys, and the keys
being truly random. The algorithms and keys used today are so good
that it should take longer than the current age of the universe
to break a properly encrypted message. Nonetheless, some
security-conscious individuals and organizations are worried that new
code-breaking techniques may be found. Consequently, researchers have created and deployed some quantum encryption systems,
which rely on fundamentally random subatomic processes and, in
theory, can never
be broken.
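
To see what “combining plaintext with a random key” means in practice, here is the simplest such scheme, the one-time pad, sketched in Python. Each byte of the message is XORed with a byte of truly random key; as long as the key is as long as the message, never reused, and kept secret, the ciphertext is provably unbreakable. (Practical systems use algorithms such as AES precisely so keys can stay short and reusable.)

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Combine message and key byte by byte; XOR is its own inverse."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))  # one random key byte per message byte

ciphertext = xor_bytes(plaintext, key)     # indistinguishable from random bytes
recovered = xor_bytes(ciphertext, key)     # (p ^ k) ^ k == p

print(ciphertext.hex())
print(recovered)                           # b'attack at dawn'
```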

Genetic Engineering

Evolution created a
noise-resistant digital code to store the blueprints of life: DNA,
with its four-letter alphabet. DNA allows organisms to replicate a
single cell over and over to make an entire human body composed of
trillions of cells, each with an identical genome. Our cells even
have elaborate systems to repair and correct damaged DNA.
Consequently, although these repair systems can break down, as they do in
cancer, the chances of a DNA mutation in any given cell are low. Also, as in
cancer, most mutations are likely to have a negative, or at best
neutral, impact on the cell’s functioning. This is a problem for
genetic engineers who want to produce lots of mutated cells quickly
so they can find the rare variation that’s useful, like a corn cob
with bigger kernels.

So they rely on
mutagens. Mutagens are factors that jumble DNA, and there is a huge
variety to choose from, depending on the organism and the degree of jumbling
that is desired. Exposure to gamma rays—the same thing that turned
mild-mannered scientist Bruce Banner into
the Hulk—is
one popular mutagen, and even caffeine can do the trick, especially
when working with bacteria or fungi. Fortunately for the enormous
number of people for whom caffeine is one of the major food groups,
its mutating powers are confined to cells in petri dishes.

Gambling

Gaming operators
must walk a fine line. To keep
players interested—and law enforcement agencies uninterested—their
games must be fair. But they must also be guaranteed to
generate a profit in the long run. Casinos need to know just how
likely a 21 in blackjack or a red 32 in roulette is, so they can set
appropriate payouts (or, when it comes to virtual games like
electronic slot machines, what the odds of three cherries turning up
should be).
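
The arithmetic behind those payouts is simple enough to check. A single-number bet in European roulette wins 1 time in 37 (the pockets are 0 through 36) but pays only 35 to 1, and that small gap is the house’s guaranteed edge. In Python:

```python
from fractions import Fraction

p_win = Fraction(1, 37)    # one winning pocket out of 37
payout = 35                # a winning $1 bet earns $35 in profit

# Expected profit per $1 wagered: win the payout with p_win, lose $1 otherwise.
edge = p_win * payout + (1 - p_win) * (-1)
print(edge, float(edge))   # -1/37, about -2.7 cents on the dollar
```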

This requires being
able to calculate odds with precision. So it probably shouldn’t be
a surprise that our modern mathematical understanding of probability
came about largely because of gaming. Antoine Gombauld was a
17th-century gambler who had a friend in one of the greatest geniuses
of all time, Blaise Pascal. (Among many other contributions, Pascal
invented the first mechanical calculator when he was 19 years old.)
Gombauld was trying to figure out the correct odds of throwing two
sixes with a pair of dice, and asked Pascal for help.
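
The single-throw version of the question has a tidy answer: two dice can land in 6 × 6 = 36 equally likely ways, and only one of them is a double six, so the probability is 1/36. A brute-force check in Python agrees:

```python
import random

random.seed(0)
trials = 1_000_000
hits = 0
for _ in range(trials):
    die1, die2 = random.randint(1, 6), random.randint(1, 6)
    if die1 == 6 and die2 == 6:
        hits += 1

print(hits / trials)  # close to 1/36, or about 0.0278
```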

Corresponding with
Pierre de Fermat (best known for his Last Theorem),
Pascal worked out ways to calculate probabilities without having to
tally up every possible outcome of a game one by one, something that
quickly becomes cumbersome when the games get complicated. This work
was the foundation of what’s now called probability theory,
and is used to understand all kinds of complex phenomena, from the
stock market to quantum physics.

Pascal’s analysis
highlights that one reason gambling is so lucrative is that our
intuitive understanding of the likelihood of a random event is often
quite wrong. Imagine tossing a fair coin 10 times and, by chance,
getting 10 tails in a row. Now, how much would you bet that you’re
going to get another tails on your next toss? Some people think that
because getting 10 tails in a row was already unlikely, an 11th
tails must be extraordinarily unlikely—that heads is “due.”
Other people would believe tails is on an unstoppable lucky streak,
so the chances of an 11th straight tails must be high. But
Pascal showed us that the odds in this case are, in fact,
exactly 50/50. The coin doesn’t “remember” what’s gone
before. But players remember, and they tend to believe that
their luck or instincts can outsmart randomness, so they under- or
overestimate their chances. The result is that a casino can be
completely upfront about the odds in their games, which are a lot
poorer than 50/50, and still have a steady stream of players willing
to put money down: The gaming industry raked in $430 billion in 2012,
according to the analyst firm Global Betting and Gaming Consultants
(GBGC).
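
The coin’s lack of memory is easy to verify for yourself. This Python sketch simulates a million flips, finds every run of 10 straight tails, and records the very next flip; the share of tails hovers right around 50 percent.

```python
import random

random.seed(1)
flips = [random.choice("HT") for _ in range(1_000_000)]

next_after_streak = []
run = 0
for i in range(len(flips) - 1):
    run = run + 1 if flips[i] == "T" else 0
    if run >= 10:                          # just saw 10 (or more) tails in a row
        next_after_streak.append(flips[i + 1])

share = next_after_streak.count("T") / len(next_after_streak)
print(f"{len(next_after_streak)} streaks found; "
      f"next flip was tails {share:.1%} of the time")
```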

Computer Simulations

Systems like hurricanes or stock markets are hard to predict because they
are so complex. To make predictions anyway, an analyst creates a computer
model of the system she’s
trying to understand, feeds in a description of current conditions,
and lets the simulation evolve. Unfortunately, a lot of
approximations must be made: Only so many wind-speed measurements can
be taken, and no one can read the mind—or stock-trading program—of
every trader.

So the analyst is left with a big question mark about how far she can
trust the simulation: If she’d happened to choose slightly
different starting approximations, would she have gotten radically
different predictions? One way to reduce this uncertainty is the Monte Carlo method,
named after the famous casino in Monaco. The analyst runs the
simulation hundreds or thousands of times, with the initial
conditions randomly adjusted each time. Then she looks at the
collection of predictions. If 90 percent of weather forecast
simulations show a storm tracking straight up the East Coast, it’s
probably time to batten down the hatches.  
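
Here is the method in miniature, as a Python sketch with every number invented for illustration: a toy “storm” drifts for 48 hourly steps, its measured starting position and steering wind are only approximately known, so each run jitters them within their error bars, and the ensemble yields a probability instead of a single track.

```python
import random

def landfall(start, wind):
    """Toy model: drift the storm for 48 hourly steps (positions in km)."""
    pos = start
    for _ in range(48):
        pos += wind + random.gauss(0, 2.0)   # internal weather variability
    return pos

random.seed(7)
measured_start, measured_wind = 0.0, 1.5     # best available measurements
runs = 10_000
hits = 0
for _ in range(runs):
    start = measured_start + random.gauss(0, 5.0)  # position uncertainty
    wind = measured_wind + random.gauss(0, 0.2)    # wind-speed uncertainty
    if 60 <= landfall(start, wind) <= 90:
        hits += 1

print(f"{100 * hits / runs:.0f}% of runs put landfall in the 60-90 km zone")
```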

Generating Randomness

Creating true
randomness is a lot harder than thinking of a number between one and
10. Humans can’t be trusted to do it reliably. We mistake the
inevitable coincidences that arise in truly random sequences—such
as the same digit appearing three times in a row—as evidence of a
pattern. So when asked to produce a random sequence ourselves, we avoid
these coincidences, and the result is more predictable.

But don’t feel
bad—computers are terrible at producing random numbers, too. This
is because they are digital systems ruled by logic—every number the
computer generates is in some way based on other numbers in its
memory. On its own, a computer can’t generate a truly random
number.

So when it’s
critical that a computer use truly random numbers, an external source
of noise must be used. These sources can include having the user
jiggle their mouse around, or even odd approaches like pointing a digital camera at a lava lamp.
This is often impractical, so computer scientists invented algorithms that produce pseudo-random numbers,
which are close enough to truly random for most purposes. The
algorithms start with a so-called seed and then generate a sequence
from that. Seeds are usually relatively small numbers, so programs
can either ask users to pick one, or can choose them by looking at something like the computer’s internal clock.
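
A tiny example shows both points at once: every output is computed from the number before it, and the seed fixes the entire sequence. This is a linear congruential generator using the well-known Park-Miller constants (a historically popular choice, though far too weak for cryptography):

```python
def lcg(seed):
    """Each output is derived from the previous state; nothing is random."""
    state = seed
    while True:
        state = (16807 * state) % 2147483647   # Park-Miller constants
        yield state / 2147483647               # scaled into [0, 1)

gen_a = lcg(seed=42)
gen_b = lcg(seed=42)
print([round(next(gen_a), 4) for _ in range(5)])
print([round(next(gen_b), 4) for _ in range(5)])  # identical: same seed, same sequence
```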

It’s important to
statistically test these algorithms: Some poor random number
generators don’t produce numbers that are evenly distributed over
the possible range of numbers, which can, for example, bias the
outcome of Monte Carlo simulations that rely on having a fair sample
of inputs. Other poor generators have been known to produce numbers
that are easily predictable: In 2003, geological statistician Mohan
Srivastava worked out how to identify winning scratch tickets from
the Ontario Lottery thanks to a pattern in the visible numbers
printed on the ticket.
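
A first-pass version of such a test fits in a few lines: bucket a large sample of the generator’s output and compare the bucket counts against the uniform expectation using a chi-square statistic. (Real validation suites, such as NIST’s test battery or TestU01, run dozens of far more stringent tests.)

```python
import random

random.seed(123)
buckets, samples = 10, 100_000
counts = [0] * buckets
for _ in range(samples):
    counts[int(random.random() * buckets)] += 1

expected = samples / buckets
chi_sq = sum((c - expected) ** 2 / expected for c in counts)
print(counts)
# With 9 degrees of freedom, a healthy generator usually scores below about
# 16.9 (the 5 percent critical value); much larger values flag uneven output.
print(f"chi-square = {chi_sq:.1f}")
```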

The Color of Noise

Engineers and
scientists refer to all sorts of randomness in a system as “noise,”
because that’s literally what it was—the audible pops, hisses,
and buzzes that interfered with messages sent via the early
electronic systems of telegraphs, radios, and telephones. Hence
telecommunications companies quickly developed a keen interest in
understanding noise and finding ways to reduce it, most famously at
AT&T’s Bell Labs in New Jersey. There, in 1948, Claude Shannon
published “A Mathematical Theory of Communication,”
founding the entire field of information theory by thinking about the
limits of transmitting information in the presence of noise.

Everybody is
familiar with white noise, the hissing sound associated with static.
White noise is random, in that any given sound frequency is as likely
to appear as any other. This is why it’s called white noise: Like
white light, it contains many frequencies evenly mixed together. But
it’s not the only kind of random noise possible; there’s actually
a whole spectrum of noises labeled with different colors, the most
important of which are pink and brown.

Pink noise
is also known as 1/f noise, which means that the power at a given
frequency is inversely proportional to that frequency.
That is, low-frequency sounds dominate over high-frequency sounds.
Like white noise, the name comes by analogy to colors—the
low-frequency end of the visible spectrum is red, so the noise is
“tinted” pink. The pattern of pink noise actually turns up
naturally all over the place, most notably in music: If you plot the
distribution of frequencies in many compositions, it follows a 1/f
pattern.

Brown noise
is similar to pink noise, except that the power at a given frequency
is inversely proportional to the square
of the frequency (1/f²).
This means low-frequency sounds dominate even more than with pink
noise. (This time the name doesn’t relate to visible light, but
comes from “Brownian motion”—the random movements of particles
suspended in a liquid or gas as they are knocked around by
molecules.) Like pink noise, it turns up naturally in a lot of
places—including the wiring of our neurons, although the exact role
it plays is not fully understood.
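
All three colors can be produced with one recipe: start from a flat random spectrum and attenuate the high frequencies by the right power of f. A sketch using NumPy (because power scales as amplitude squared, the amplitudes are divided by f^0, f^0.5, and f^1 for white, pink, and brown respectively):

```python
import numpy as np

rng = np.random.default_rng(0)

def colored_noise(n, exponent):
    """exponent 0 -> white, 0.5 -> pink (1/f power), 1 -> brown (1/f^2)."""
    # Random complex spectrum, one bin per frequency of a real-valued signal.
    spectrum = rng.normal(size=n // 2 + 1) + 1j * rng.normal(size=n // 2 + 1)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = 1.0                        # sidestep division by zero at DC
    spectrum /= freqs ** exponent         # attenuate the high frequencies
    samples = np.fft.irfft(spectrum, n)   # back into the time domain
    return samples / np.max(np.abs(samples))

white, pink, brown = (colored_noise(44_100, e) for e in (0.0, 0.5, 1.0))
```

Played back at 44,100 samples per second, each array is one second of sound: white is pure hiss, pink resembles steady rainfall, and brown is a low rumble.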

Stephen Cass is a freelance science and technology journalist based in Boston, who
frequently covers physics, aerospace, and computing.
