In the preface to Saint Joan, his play about Joan of Arc, the teenager whose visions of saints and archangels stirred soldiers into battle early in the 15th century, George Bernard Shaw makes a surprisingly compelling argument that following Joan's mystical visions was at least as rational as following a modern general onto a battlefield full of highly technological, incomprehensible weapons of war. His argument is that the warrior of the 20th century was driven as much by faith as the warrior of the 15th century:

In the Middle Ages people believed that the earth was flat, for which they had at least the evidence of their senses: We believe it to be round, not because as many as one percent of us could give the physical reasons for so quaint a belief, but because modern science has convinced us that nothing that is obvious is true, and that everything that is magical, improbable, extraordinary, gigantic, microscopic, heartless, or outrageous is scientific.

Hyperbole, for sure, but it is remarkable how much we depend on what we’re told to get by in the modern world. So little of what happens to us is understood through direct sensory experience. From the alarm that wakes us up, to the toilet that we wander to, to the smartphone that we turn on (before or after our visit to the bathroom), to the coffee machine that welcomes us into the kitchen, to the tap that we use to fill the coffee machine, nothing is completely within our conceptual grasp. But we use these tools; we even rely on them, because they work (except when they don’t and our life goes a little out of balance). We can thank the experts who created them, for we are dependent on their know-how. We have faith in the masters of modern technology after years of successfully using their devices. But when those devices fail, when the cable service goes out or the drain emits brown sludge, we’re rudely reminded of just how little we know about the conveniences of modern life.

A “knowledge illusion” occurs because we live in a community of knowledge and we fail to distinguish the knowledge that is in our heads from the knowledge outside of it. We think the knowledge we have about how things work sits inside our skulls when in fact we’re drawing a lot of it from the environment and from other people. This is as much a feature of cognition as it is a bug. The world and our community house most of our knowledge base. A lot of human understanding consists simply of awareness that the knowledge is out there. Sophisticated understanding usually consists of knowing where to find it. Only the truly erudite actually have the knowledge available in their own memories.

The knowledge illusion is the flip side of what economists call the curse of knowledge. When we know about something, we find it hard to imagine that someone else doesn’t know it. If we tap out a tune, we’re sometimes shocked that others don’t recognize it. It seems so obvious; after all, we can hear it in our heads. If we know the answer to a general knowledge question (who starred in The Sound of Music?), we have a tendency to expect others to know the answer, too. The curse of knowledge sometimes comes in the form of a hindsight bias. If our team just won a big game or our candidate just won an election, then we feel like we knew it all along and others should have expected that outcome too. The curse of knowledge is that we tend to think what is in our heads is in the heads of others. In the knowledge illusion, we tend to think what is in others’ heads is in our heads. In both cases, we fail to discern who knows what.

Because we live inside a hive mind, relying heavily on others and the environment to store our knowledge, most of what is in our heads is quite superficial. We can get away with that superficiality most of the time because other people don’t expect us to know more; after all, their knowledge is superficial too. We get by because a division of cognitive labor exists that divides responsibility for different aspects of knowledge across a community.

The division of cognitive labor is fundamental to the way cognition evolved and the way it works today. The ability to share knowledge across a community is what has allowed us to go to the moon, to build cars and freeways, to make milkshakes and movies, to veg out in front of the TV, to do everything that we can do by virtue of living in society. The division of cognitive labor makes the difference between the comfort and safety of living in society and the hardship of surviving alone in the wild.

But there are also downsides when we rely on others to hold knowledge for us. You are probably familiar with Alice (of Wonderland fame), but few people today actually read the Lewis Carroll novels that introduced her to the world. Many know Alice indirectly, through movies, cartoons, and TV shows, not through the unique and mind-bending experience of reading Carroll’s marvelous books. If we don’t know calculus, we can’t understand the beauty of imagining time disappearing by letting it shrink into a moment and how that relates to the tangent of a curve. We can’t see what Newton saw that made him so important that the authorities buried him in Westminster Abbey. That’s one cost of living in a community of knowledge: We miss out on those things that we know only through the knowledge and experience of others.
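
For readers who want the one line the passage alludes to, here it is as standard textbook background (not part of the original text): the derivative is what remains when an interval of time shrinks toward a single moment,

\[
f'(t) \;=\; \lim_{\Delta t \to 0} \frac{f(t+\Delta t)-f(t)}{\Delta t},
\]

the limit of average change over a vanishing interval, which is exactly the slope of the tangent to the curve of f.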

There are also more dangerous consequences. Because we confuse the knowledge in our heads with the knowledge we have access to, we are largely unaware of how little we understand. We live with the belief that we understand more than we do.

If you have used the Internet recently to work on a task, you will find it hard to assess your individual ability to perform that task, because your contribution is so intertwined with the Internet's. All the evidence concerns the team: you and the computer operating together. And that team is naturally better at the task than an individual would be, so the evidence suggests you're better at the task than someone who didn't have the Internet at hand. Because thought extends beyond the skull and encompasses all the tools available for pursuing goals, it's well-nigh impossible to gauge exactly what your individual contribution is. It's just like being on a team: If the team wins, we win, whether our role was large or small.

This has some worrying consequences. The Internet’s knowledge is so accessible and so vast that we may be fashioning a society where everyone with a smartphone and a Wi-Fi connection becomes a self-appointed expert in multiple domains.

In one study, in collaboration with Adrian Ward, we asked doctors and nurses on the website Reddit about their experiences with patients who search for diagnoses on sites like WebMD before visiting their office. The medical professionals told us that such patients don't actually know appreciably more than patients who haven't consulted the Internet. Nonetheless, they tend to be highly confident about their medical knowledge, which can lead them to reject the professional's diagnosis or to seek alternative treatments. In another study, we asked people to search the Internet for the answers to simple questions about finance, like “What is a stock share?” Next we asked them to play an unrelated investment game (the information they had looked up was no help in the game). We also gave them the opportunity to bet on their performance. People who had searched the Internet first bet a lot more on their performance than those who had not. But they didn't do any better in the game, and they ended up earning less money.

The problem is that spending a few minutes (or even hours) perusing WebMD is just not a substitute for the years of study needed to develop enough expertise to make a credible medical diagnosis. Spending a few minutes looking up facts on financial websites is not enough to understand the nuances of investing. Yet when we have the whole world’s knowledge at our fingertips, it feels like a lot of it is in our heads.

One of the most advanced forms of artificial intelligence for helping with everyday tasks is GPS (Global Positioning System) mapping software. Standalone GPS devices became common in the 1990s and early 2000s; once smartphones began shipping with built-in GPS in the late 2000s, the technology became omnipresent. As you drive along, these formidable little systems map out optimal routes, display them visually, update their recommendations according to current traffic conditions and whether you've missed your turn, and will even speak to you. Their capacities are so remarkable that they've completely changed the way most of us navigate. They have even changed many relationships, mostly for the better: No longer do couples have to bicker about whether to stop to ask for directions.
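
To make “map out optimal routes” and “update according to traffic” concrete, here is a minimal sketch of the classic idea underneath such systems: shortest-path search over a weighted road graph. This is a toy, not how any commercial GPS actually works; the map, road names, and traffic multipliers are all invented for illustration.

```python
import heapq

def best_route(roads, start, goal, traffic=None):
    """Dijkstra's shortest-path search over a weighted road graph.

    roads:   {node: [(neighbor, minutes), ...]}
    traffic: optional {(node, neighbor): delay multiplier}, e.g. 3.0 = jam
    """
    traffic = traffic or {}
    frontier = [(0.0, start, [start])]   # (minutes so far, node, path taken)
    visited = set()
    while frontier:
        minutes_so_far, node, path = heapq.heappop(frontier)
        if node == goal:
            return minutes_so_far, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, minutes in roads.get(node, []):
            slowdown = traffic.get((node, nxt), 1.0)
            heapq.heappush(frontier,
                           (minutes_so_far + minutes * slowdown, nxt, path + [nxt]))
    return float("inf"), []              # goal unreachable

# Invented toy map; re-running with fresh traffic data is the "re-route".
roads = {"home":      [("highway", 10), ("lake road", 18)],
         "highway":   [("office", 8)],
         "lake road": [("office", 9)]}
print(best_route(roads, "home", "office"))                 # -> 18 min via highway
print(best_route(roads, "home", "office",
                 traffic={("home", "highway"): 3.0}))      # jam -> 27 min via lake road
```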

But notice what these amazing machines don’t do: They don’t decide to go the long route because you’re on your way to your parents’ house and you’d prefer to be late. They don’t take the route that goes by the lake because there’s a particularly beautiful sunset this evening. They don’t suggest that traffic is really bad today and that you’d be better off staying home. They could do any one of these things, but doing so would have to be programmed in. What they can’t do is read your mind to figure out your intentions—your goals and desires and your understanding about how to satisfy them—and then make those intentions their own in order to arrive at novel suggestions. They cannot share your intentions in order to pursue joint goals.
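
Continuing the toy sketch above, the claim that preferences “would have to be programmed in” can be shown directly. The wrapper below and its scenic knob are hypothetical: the machine can favor the lake road only because a programmer reified “scenic” as an explicit parameter, and the user must set it; nothing in the system can infer that tonight's sunset makes the detour worthwhile.

```python
# Continuing the toy sketch above: a preference exists for the machine only
# once a programmer has reified it as an explicit parameter. This wrapper and
# its "scenic" knob are invented for illustration.
def best_route_scenic(roads, start, goal, scenic_edges=(), scenic_discount=0.4):
    """Bias the search toward roads the *user* declared scenic."""
    biased = {node: [(nxt, minutes * (scenic_discount
                                      if (node, nxt) in scenic_edges else 1.0))
                     for nxt, minutes in edges]
              for node, edges in roads.items()}
    return best_route(biased, start, goal)

# You must hand the machine your intention; it cannot infer "sunset tonight".
print(best_route_scenic(roads, "home", "office",
                        scenic_edges={("home", "lake road")}))  # lake road wins
```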

We do not share common ground with our technology: Except in the most primitive sense, there is no mutual understanding between a machine and a user about what each knows and what they are doing together. The machine can ask you if your goal is A, B, or C and respond appropriately to your answer. But it cannot share that goal with you in a way that would justify its taking the initiative to pursue a novel objective at the last second. You have an implicit contract with your machine: It will do what it can to help you pursue your goal, but you have to tell it what that goal is. The machine is not a collaborator; it's a tool. In that sense, the tools of artificial intelligence are more like a microwave oven than another human being. Technology may be a big part of the community of knowledge, providing information and useful instruments, but it is not a member of the community in the way that humans are. We don't collaborate with machines, just as we don't collaborate with sheep; we use them.

The ability to share an intention is a critical part of what matters in an intelligent agent. Central human functions like language and conceptualization depend on it because they are both collaborative activities. We suspect it’s been hard to program a computer to share your intentionality because doing so would require the computer to be able to coordinate with others—to be aware of what you know and what others know; it would require an ability to reflect on one’s own cognitive processes and those of others. No one knows how to program a computer to be aware. If someone could, we would understand what it means to be conscious. But we don’t.

We are at an awkward moment in the history of technology. Almost everything we do is enabled by intelligent machines. Machines are intelligent enough that we rely on them as a central part of our community of knowledge. Yet no machine has that singular ability so central to human activity: No machine can share intentionality. This has consequences for how humans and machines work together.

Modern airplanes simply cannot be flown without the help of automation. The most advanced military jets are fly-by-wire: They are so unstable that they require an automated system that can sense and act many times more quickly than a human operator to maintain control. Our dependence on smart technology has led to a paradox. As the technology improves, it becomes more reliable and more efficient. And because it’s reliable and efficient, human operators start to depend on it even more. Eventually they lose focus, become distracted, and check out, leaving the system to run on its own. In the most extreme case, piloting a massive airliner could become a passive occupation, like watching TV. This is fine until something unexpected happens. The unexpected reveals the value of human beings; what we bring to the table is the flexibility to handle new situations. Machines aren’t collaborating in pursuit of a joint goal; they are merely serving as tools. So when the human operator gives up oversight, the system is more likely to have a serious accident.
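
To give a feel for “sense and act many times more quickly than a human operator,” here is a toy proportional-derivative control loop. Everything in it is invented for illustration, and real fly-by-wire control laws are vastly more elaborate; the point is only the tempo: sense, decide, act 500 times a second, against a human's handful of corrections.

```python
# Toy proportional-derivative (PD) control loop, invented for illustration;
# real fly-by-wire control laws are far more elaborate. The point is tempo:
# sense, decide, act 500 times per second, far beyond human reaction speed.
def pd_controller(kp, kd, dt):
    prev_error = None
    def step(target, measured):
        nonlocal prev_error
        error = target - measured
        deriv = 0.0 if prev_error is None else (error - prev_error) / dt
        prev_error = error
        return kp * error + kd * deriv
    return step

dt = 1.0 / 500.0                          # 500 corrections per second
control = pd_controller(kp=10.0, kd=4.0, dt=dt)
pitch, pitch_rate = 5.0, 0.0              # aircraft perturbed 5 degrees nose-up
for _ in range(1000):                     # two seconds of simulated flight
    elevator = control(target=0.0, measured=pitch)
    pitch_rate += (0.8 * pitch + elevator) * dt   # toy dynamics: unstable airframe
    pitch += pitch_rate * dt
print(round(pitch, 2))                    # held close to zero despite the instability
```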

The automation paradox is that the very effectiveness of automated safety systems breeds dependence on them, and that dependence undermines the contribution of the human operator, creating greater danger. Modern technology is extremely sophisticated and getting more so. Automated safety systems are improving. As they grow more complex, accumulating features and backup systems, they are relied on to do more and more, so when they fail, the resulting catastrophe is that much bigger. The irony is that automated systems on airplanes, trains, and industrial equipment can compromise overall safety. Because the technology doesn't understand what the system is trying to accomplish and doesn't share the humans' intentionality, there's always the danger that something will go wrong. And when the human part of the system isn't ready for the technology to fail, disaster can ensue.

Here’s a case in point: An airplane stall occurs when the craft’s airspeed is not sufficient to generate enough lift to keep the plane in flight. If it stalls, the airplane essentially falls from the sky. A good way to recover from a stall is to point the nose of the plane down and increase engine power until the plane’s airspeed generates sufficient lift to keep it aloft. Stall recovery is one of the most basic skills that prospective pilots master in flight school. This is why investigators were shocked when they recovered the black box from Air France Flight 447, which crashed into the ocean in 2009, killing 228 people. The Airbus A330 had entered a stall and was falling from the sky, and the copilot inexplicably tried to push the nose of the plane up rather than down. How could this happen? A report commissioned by the Federal Aviation Administration in 2013 concluded that pilots have become too reliant on automation and lack basic manual flying skills, leaving them unable to cope in unusual circumstances. In this case, the flight crew may have been unaware that it was even possible for this plane to stall and did not properly interpret the warning signals provided by their equipment. This is a perfect example of the automation paradox: The plane’s automation technology was so powerful that when it failed, the pilots, as a group, didn’t know what to do.
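
The physics behind that first sentence is the standard lift relation (textbook background, not from the original text):

\[
L = \tfrac{1}{2}\,\rho\,v^{2}\,S\,C_L,
\]

where \(\rho\) is air density, \(v\) airspeed, \(S\) wing area, and \(C_L\) the lift coefficient. Lift falls with the square of airspeed, so halving \(v\) cuts lift by a factor of four; pointing the nose down trades altitude for airspeed until lift again supports the plane's weight.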

You may have already experienced the automation paradox, thanks to the proliferation of GPS devices. Some people have such a close relationship with them that they do whatever their GPS tells them to do. It is easy to forget that your GPS device doesn’t really understand what you’re trying to accomplish. There are many stories of people driving into bodies of water and off cliffs because they were so busy obeying their GPS master.

One of the skills that comes along with being aware of oneself is the ability to reflect on what’s going on. People can always observe and evaluate their own behavior. They can step back and make themselves aware of what they’re doing and what’s happening in their immediate environment. They can even observe some of their own thought processes (the deliberative, conscious parts). If they don’t like what they see, they can exert some influence to change it. That influence is limited, to be sure. If you’re sliding down a sheet of ice without an ice pick, then there’s little you can do to stop. Similarly, if you’re obsessed by some fear or desire, you may not be able to control that, either. But at least we have the capacity—when we’re awake and conscious—to be aware of what’s happening. To the degree that we have control over our actions (if, for instance, we’re not being drawn uncontrollably to a big slice of chocolate cake in front of us), we can modify our actions.

By contrast, machines always have to obey their programs. Those programs may be sophisticated, and there are ways to program machines to adapt to changing environments. But in the end, if the designer has not thought of a situation, the machine will not know how to respond when that situation occurs, and it is going to do the wrong thing. So a critical role for human beings is oversight: being there in case something goes terribly wrong. The big danger today is that no one has access to all the knowledge necessary to understand and control modern sophisticated technology. And technology is growing more sophisticated faster than ever.
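
A minimal, hypothetical illustration of that point (the function and its cases are invented): a rule-following routine does exactly what its cases say, and an input its designer never anticipated falls through to a default that happens to be wrong.

```python
# Invented toy example: the machine does "the wrong thing" not out of malice
# but because its designer never enumerated this case.
def autopilot_response(sensor_reading):
    if sensor_reading == "airspeed_low":
        return "pitch down, add thrust"
    if sensor_reading == "altitude_low":
        return "pitch up, add thrust"
    # The designer assumed every reading would match a case above. Anything
    # unforeseen (an iced-over sensor, say) gets the do-nothing default.
    return "maintain course"

print(autopilot_response("airspeed_low"))         # sensible
print(autopilot_response("airspeed_sensor_iced")) # unforeseen -> "maintain course"
```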

Steven Sloman is a professor of cognitive, linguistic, and psychological sciences at Brown University. He is the editor in chief of the journal Cognition.

Philip Fernbach is a cognitive scientist and professor of marketing at the University of Colorado’s Leeds School of Business.

From The Knowledge Illusion: Why We Never Think Alone by Steven Sloman and Philip Fernbach, published by Riverhead Books, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2017 by Steven Sloman and Philip Fernbach.

Lead Image Art created with: Stockbakery; Borysevych.com / Shutterstock; Thomas Kaiser / Wikipedia
