
My background is in mathematics, which has a reputation for being a solid, sturdy subject. Once something has been proven, it generally stays true forever. But in the course of researching my book, I came to realize that even in mathematics, certainty can be a dangerous thing. In the 19th century, several key mathematical theorems began to unravel. For two millennia, European mathematicians had taken inspiration from the natural world. As a result, they had concluded that certain things were intuitively obvious: Geometric shapes obeyed rules like “the whole is larger than the part” and rates of change followed the smooth motion of a falling object. But when dealing with infinities or abstract dimensions, these assumptions no longer held. By the 1870s, mathematicians had discovered awkward theoretical counterexamples, like a part that was the same size as the whole, and a motion—known as a “nowhere differentiable function”—that was never smooth.
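The essay doesn't name these counterexamples, but the standard candidates fit them exactly. Pairing every whole number n with the even number 2n shows that a part (the even numbers) can be the same size as the whole. And Weierstrass's function, continuous at every point yet differentiable at none, is the classic "motion that is never smooth":

$$
W(x) = \sum_{n=0}^{\infty} a^{n} \cos(b^{n} \pi x), \qquad 0 < a < 1, \quad b \text{ an odd integer}, \quad ab > 1 + \tfrac{3\pi}{2}.
$$

Each added cosine wrinkles the curve at a finer scale, so no amount of zooming in ever produces a well-defined slope.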

As I dug deeper while writing my book, I found myself wondering whether other cultures had relied so heavily on real-life geometry, inspired by the natural world, when developing their concepts of mathematical truth. China, for instance, had embraced negative numbers—an abstraction that cannot be easily visualised—much earlier than Europe, because its early texts focused on problems involving fortunes and debts. It turns out that the reliance on geometric intuition had effectively been a Trojan horse for European mathematicians, smuggling flawed assumptions into their work. In the 19th century, some established researchers dismissed the emerging counterexamples as “monsters” and “an outrage against common sense”; they were nuisances to be shunned. But over time, these monsters became unavoidable—and even useful. Modern research now relies on these once-undesirable ideas: Astrophysics requires non-standard geometric rules, while probability theory is built on endlessly unpredictable changes.

TRUTHINESS: Adam Kucharski says that though misinformation is clearly an enormous problem in our current era, not enough attention is given to the equally troublesome matter of excessive doubt about true information. Telling people you can’t believe anything on social media could backfire, he says. Photo courtesy of Adam Kucharski.

Over the past decade, I’ve spent a lot of time writing about—and in some cases building—AI algorithms. One thing that’s often bugged me is why seemingly advanced AI can occasionally make such basic errors. Take image recognition. Famously, the addition of a faint digital watermark can cause an AI classifier to mistake an image of a panda for a gibbon.
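The panda example comes from what researchers call an adversarial attack. One simple version, the fast gradient sign method, nudges every pixel slightly in whichever direction most increases the classifier's error. Here is a minimal sketch in PyTorch; the model choice and perturbation size are stand-in assumptions rather than the original setup:

```python
# Sketch of the fast gradient sign method (FGSM), the technique behind
# the panda-to-gibbon example. Model and epsilon are illustrative: the
# original demonstration used a different ImageNet network.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.007):
    """Perturb `image` just enough to push the model toward a wrong answer."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel a tiny amount in whichever direction raises the loss.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Demo on random pixels; a real attack would start from an actual photo.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([388])  # ImageNet class index for "giant panda"
x_adv = fgsm_attack(x, y)
print(model(x_adv).argmax().item())  # frequently no longer 388
```

Because each pixel moves by at most epsilon, the altered image looks identical to a human, yet that single gradient step is often enough to flip the label.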

While writing my book, I interviewed Tony Wang, an AI researcher who’d become interested in two specific hypotheses to explain these errors. First, it could be that the algorithms were implicitly mimicking the fast, instinctive mental processing that can also lead humans to a bad snap judgement. This would suggest an algorithm that reflected a bit more on its decisions could avoid such errors. The second possibility was that AI just isn’t good enough yet, and a truly “superhuman” version would outgrow these errors.



To test these hypotheses, Wang and his colleagues focused on the complex game of Go, which has been mastered by AI in the past decade. They started by training an “adversarial” algorithm that searched for flaws in top Go-playing AI software. Eventually, they found two absurd strategies that could beat the software—the sorts of strategies that even an amateur human wouldn’t be fooled by. Yet the “superhuman” AI player remained highly confident of victory, right up to the moment it lost.

This suggested that neither hypothesis was correct. Even the reflective AI capable of playing a game as complex as Go wasn’t safe, and seemingly “superhuman” software still fell for ridiculous tricks. No matter how intelligent an AI appears, unexpected weaknesses may be inevitable.


When I began writing Proof, the effect of misinformation—particularly during the worst months of the COVID-19 pandemic—was very much on my mind. In January 2021, rioters were storming the United States Capitol while COVID deniers were harassing medics outside hospitals. Why do so many people believe things that aren’t true?


Today, much attention is given to misinformation, and the supposed flood of falsehoods online. In response, researchers and policy makers have sought ways to reduce false beliefs. But the more I looked, the more this simple story unravelled. A vocal minority do consume extreme amounts of false information and conspiracy theories online, but the broader picture shows most people still engage far more with trustworthy news sources than dubious ones.


This imbalance leads to a problem: If a study only measures belief in false content, an intervention that reduces belief in all information will appear to be successful. Telling people “you can’t trust anything you read on social media” might protect them from lies, but it may also have the pernicious side effect of undermining their trust in the truth.
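A toy calculation makes this concrete. Misinformation researchers often track "discernment," the gap between belief in true content and belief in false content. The numbers below are hypothetical, but they show how a blunt "trust nothing" intervention can score well on a false-only metric while discernment gets worse:

```python
# Hypothetical numbers illustrating the measurement problem: an
# intervention that halves belief in *everything* looks like a success
# on a false-only metric, yet damages discernment (belief in true
# content minus belief in false content).
true_before, false_before = 0.70, 0.40
true_after, false_after = 0.35, 0.20  # blunt "trust nothing" effect

print(f"Belief in falsehoods: {false_before:.2f} -> {false_after:.2f}")  # improves

discernment_before = true_before - false_before  # 0.30
discernment_after = true_after - false_after     # 0.15
print(f"Discernment: {discernment_before:.2f} -> {discernment_after:.2f}")  # worsens
```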

In the early 20th century, the mathematician Henri Poincaré noted that “To doubt everything or to believe everything are two equally convenient solutions.” The dominant focus in recent years has been on the risk of believing too much, but I’ve realised not enough attention has been given to the threat of excessive doubt. We must look at the deeper reasons that people disengage from true information and disregard valid evidence and experts.


The relationship between truth and trust will become more challenging as science becomes more complex. Poincaré was once described as the “last universalist”; no mathematician since has excelled in all areas of the field as it existed at the time. Put simply, there are now just too many mathematical topics to master. The same is true of other scientific fields. From climate analysis to AI, even experts must heavily specialize. Now, more than ever, science and technology rely on building and maintaining trust—in experts, in institutions, and in machines.

Lead image by Tasnuva Elahi; image by wickerwood / Shutterstock
