In 2014, Google fired a shot heard all the way to Detroit. Google’s newest driverless car prototype had no steering wheel and no brakes. The message was clear: Cars of the future will be born fully autonomous, with no human driver needed or desired. Even more jarring, rather than retrofit a Prius or a Lexus as Google did to build its previous two generations of driverless cars, the company custom-built the body of its youngest driverless car with a team of subcontracted automotive suppliers. Best of all, the car emerged from the womb already an expert driver, with roughly 700,000 miles of experience culled from the brains of previous prototypes. Now that Google’s self-driving cars have had a few more years of practice, the fleet’s collective drive time equals more than 1.3 million miles, the equivalent of a human logging 15,000 miles a year behind the wheel for 90 years.
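
That comparison is straightforward to check: 15,000 miles a year for 90 years works out to 15,000 × 90 = 1,350,000 miles, or roughly the 1.3 million the fleet has logged.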

In response, car companies are pouring billions of dollars into software development, and the epicenter of automotive innovation has moved from Detroit to Silicon Valley. If the car companies had the power to define the transition to driverless cars, they’d favor a very gradual process. Stage 1 would involve refining driver-assist technologies. Stage 2 would involve equipping a few high-end models with limited autonomous capability in specific situations, most likely on highways. In Stage 3, that limited autonomous capability would trickle down to cheaper models.

Humans and robots should not take turns at the wheel.

Consulting firm Deloitte describes such a gradual approach as one that’s incremental, “in which automakers invest in new technologies—e.g., antilock brakes, electronic stability control, backup cameras, and telematics—across higher-end vehicle lines and then move down market as scale economics take hold.” Such a cautious approach, although appealing to an industry incumbent, may actually be unwise. For car companies, inching toward autonomy by gradually adding computer-guided safety technologies that help human drivers steer, brake, and accelerate could prove to be an unsafe strategy in the long run, both in terms of human lives and car-industry bottom lines.

One reason car companies favor an incremental approach is that it prolongs their control over the automotive industry. Driverless cars need an intelligent on-board operating system that can perceive the car’s surroundings, make sense of the data that’s flowing in, and then act appropriately. Software capable of artificial intelligence—especially artificial perception—requires skilled personnel and a certain depth of intellectual capital to create. Car companies, while extraordinarily adept at creating complex mechanical systems, lack the staff, culture, and operational experience to effectively delve into the thorny thickets of artificial-intelligence research. Google, on the other hand, is already there.
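
To make concrete what such an operating system must do, here is a deliberately simplified sketch of that perceive-decide-act cycle. Every name, number, and rule in it is hypothetical, invented for illustration; it gestures at the shape of the problem, not at Google’s actual software.

```python
# A deliberately simplified sketch of a perceive -> plan -> act cycle.
# Every name and number here is hypothetical and for illustration only;
# this is the shape of the problem, not Google's actual software.

from dataclasses import dataclass

@dataclass
class WorldModel:
    lead_car_distance_m: float  # distance to the vehicle ahead, in meters
    lane_offset_m: float        # how far we have drifted from lane center

def perceive() -> WorldModel:
    """Fuse raw sensor data (camera, lidar, radar) into a world model.
    Stubbed with fixed values; artificial perception is the hard part."""
    return WorldModel(lead_car_distance_m=42.0, lane_offset_m=0.3)

def plan(world: WorldModel) -> dict:
    """Turn the world model into driving decisions."""
    return {
        "steer": -0.1 * world.lane_offset_m,        # nudge back toward center
        "brake": world.lead_car_distance_m < 20.0,  # brake if too close
    }

def act(command: dict) -> None:
    """Send commands to the steering and brake actuators (stubbed)."""
    print(f"steer {command['steer']:+.2f}, brake={command['brake']}")

# In a real vehicle this loop runs tens of times per second, continuously.
for _ in range(3):
    act(plan(perceive()))
```

The loop itself is trivial; the decades of research lie inside perceive(), which is exactly the artificial-perception expertise car companies lack and Google has.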

RESPONSIBLE ROBOTS: Google is developing self-driving cars with the mindset that it’s safer for humans not to be involved at all. Grendelkhan / Wikimedia

Driverless cars introduce uncertainty into the automotive industry. For the past century, selling cars directly to consumers has been a good business. However, if driverless cars enable consumers to pay per ride rather than buy their own car, the business of selling generic car bodies to transportation companies that lease out driverless taxis might not be as lucrative. If car companies are someday forced to partner with a software company to build driverless cars, such a partnership could result in car companies taking home a smaller slice of the final profits.

Like a growing kitty in the middle of an all-night poker game, there’s a lot of money sitting on the table. Former University of Michigan professor and GM executive Larry Burns explains that there’s a gold mine tucked into the 3 trillion miles that people drive each year in the United States. He said, “If a first-mover captures a 10 percent share of the 3 trillion miles per year and makes 10 cents per mile, then the annual profit is $30 billion, which is on par with Apple and ExxonMobil in good years.”
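
Burns’s arithmetic checks out: 3 trillion miles × 10 percent = 300 billion miles, and 300 billion miles × $0.10 per mile = $30 billion a year.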

Car companies and Google are like gigantic tankers on a collision course, both slowly cruising toward a common destination: wringing the most profit from the next generation of automated cars. Car companies favor an evolutionary approach, developing driver-assist modules to the point where they can take over the wheel for extended periods of time. In contrast, Google’s strategy is to dive directly into full autonomy.

Car companies aren’t the only ones that prefer a gradual approach. The U.S. Department of Transportation and the Society of Automotive Engineers have each sketched out their own maps of the road to full autonomy. While their stages differ slightly, what they have in common is the assumption that the best way forward is via a series of gradual and linear stages in which the car’s “driver assist” software temporarily takes over the driving, but quickly gives control of the car back to the human driver should a sticky situation occur.

We disagree with the notion that a gradual transition is the best way to proceed. For many reasons, humans and robots should not take turns at the wheel. Many experts, however, believe that the optimal model is for human and software to share control of the wheel, with the human driver remaining the master and the software the servant. Software based on this paradigm, in which humans and machines are partners, is known to engineers as human in the loop software. In many situations, pairing a human and a computer does indeed yield excellent results. Skilled human surgeons use robotic arms to achieve inhuman precision during surgery. Today, commercial airplanes use human in the loop software, as do many industrial and military applications.

Arguments in favor of keeping humans in the loop have their appeal. It’s an enticing thought experiment to dream of meticulously wiring together the best of human ability with the best of machine ability, similar to the intoxicating optimization puzzle of hand-picking professional football players for a fantasy football team. Machines are precise, tireless, and analytical. Machines excel at detecting patterns, performing calculations, and taking measurements. In contrast, humans excel at drawing conclusions, making associations between apparently random objects or events, and learning from past experience.

In theory, at least, if you combine a human with an intelligent machine, the result should be an alert, responsive, and extremely skilled driver. After all, the promise of human in the loop approaches to automation is that they combine what humans and machines each do best.

In reality, human in the loop software could work in a driverless car only if each party, human and software, maintained a clear and consistent set of responsibilities. Unfortunately, that is not the model being proposed by the automotive industry and federal transportation officials. Their approach keeps the human in the loop, but with unclear and shifting responsibilities.

At the core of this strategy of gradual transition is the assumption that should something unexpected occur, a beep or vibration will signal the human driver that she needs to hastily climb back into the driver’s seat to deal with the situation. A gradual and linear path toward full automation may sound sensible and safe. In practice, however, a staged transition from partial to full autonomous driving would be unsafe.
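
A toy calculation shows why that assumption is fragile. Suppose, hypothetically, that the software beeps half a second after spotting trouble and the hazard is four seconds away; the handoff then works only for drivers who can re-engage in under three and a half seconds. All the numbers below are invented for illustration, not drawn from any real system.

```python
# A toy model of the "beep and hand control back" scheme. All numbers
# are hypothetical, chosen only to illustrate the timing problem.

ALERT_DELAY_S = 0.5  # time for the system to detect trouble and beep

def handoff_succeeds(driver_reaction_s: float, time_to_hazard_s: float) -> bool:
    """The handoff works only if the driver re-engages before the hazard."""
    return ALERT_DELAY_S + driver_reaction_s < time_to_hazard_s

# An attentive driver might respond in about a second; a driver who is
# reading or watching a movie can need far longer to rebuild awareness.
for reaction_s in (1.0, 3.0, 7.0):
    ok = handoff_succeeds(reaction_s, time_to_hazard_s=4.0)
    print(f"reaction {reaction_s:>4.1f} s -> handoff {'succeeds' if ok else 'fails'}")
```

In this sketch the handoff fails precisely for the slowest, most distracted driver, which is to say, for exactly the person the beep is supposed to protect.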

People trust technology very quickly once they see it works.

Machines and humans can work together well in some situations, but driving is not one of them. Driving is not an activity suitable for a human in the loop approach for one major reason: Driving is tedious. When an activity is tedious, humans are all too happy to let machines take over, and they eagerly cede responsibility.

When I was in officer training in the navy, I learned that one of the core tenets of good management is never to divide a mission-critical task between two people; doing so is a classic management blunder known as split responsibility. The problem with split responsibility is that each person may ultimately feel it’s safe to drop the ball, assuming the other will pick up the slack. If neither party dives in to the rescue, the result is mission failure. If humans and machines are given split responsibility for driving, the results could be disastrous.

A harrowing example of split responsibility between man and machine was the plight of Air France Flight 447, which, in 2009, plunged into the Atlantic Ocean, tragically killing all 228 people on board. Later analysis of the plane’s black box revealed that the cause of the crash was not terrorism or a mechanical malfunction. What went wrong was the handoff from automated flight mode to the team of human pilots.

While in flight, the plane’s airspeed sensors iced over, and the autopilot, deprived of reliable data, unexpectedly disengaged. The team of human pilots, befuddled and out of practice, were suddenly called to the controls on what they had expected would be a routine flight. Thrust into an unexpected position of responsibility, the human pilots made a series of disastrous errors that caused the plane to nosedive into the sea.

In the fall of 2012, several Google employees were allowed to use one of the autonomous Lexuses for the freeway portion of their commute to work. The idea was that the human driver would guide the Lexus to the freeway, merge, and, once settled into a single lane, turn on the self-driving feature. Every employee was warned that this was early-stage technology and that they should pay attention to the driving 100 percent of the time. Each car was equipped with a video camera inside that filmed the passenger and the car for the entire journey.

Employee response to the self-driving car was overwhelmingly positive. All described the benefits of not having to tussle with freeway rush-hour traffic, of arriving home refreshed to spend quality time with their families. Problems arose, however, when the engineering team watched the videos from these drives home. One employee turned completely away from the driver’s seat to search his back seat for a cell-phone charger. Other people took their attention away from the wheel and simply relaxed, relieved to have a few peaceful moments of free time.

The Google report described the consequences of split responsibility, or what engineers call automation bias. “We saw human nature at work: People trust technology very quickly once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax.”

A TEXT OR NOT A TEXT: A recent study found that teenagers understand texting while driving is dangerous—but don’t think checking Twitter or taking a photo of a passenger falls under that category. Paul Oka / Flickr

Google’s conviction that there’s no middle ground—that humans and machines should not share the wheel—sounds risky, but is actually the most prudent path forward when it comes to consumer safety. Automation can impair a driver in two ways: first, by inviting him to engage in secondary tasks, such as reading or watching a video, that directly distract him from watching the road; second, by disrupting his situational awareness, his ability to perceive critical factors in the driving environment and to react rapidly and appropriately. Put the two together—a distracted driver who has no idea what’s happening outside the car—and it’s clear why splitting the responsibility for driving is such a dangerous idea.

Research at Virginia Tech sponsored by GM and the U.S. Department of Transportation’s Federal Highway Administration put some numbers around the temptation humans face when a capable technology offers to unload a tedious task. Virginia Tech researchers evaluated 12 human drivers on a test track. Each test vehicle was equipped with two forms of driver-assist software: one that managed lane centering, and another, called adaptive cruise control, that managed the car’s speed and braking. The goal of the study was to measure how humans reacted when driving technologies took over the car’s lane keeping, speed, and braking. To record the human drivers’ activities during the study, each vehicle was outfitted with data-collection and recording devices.

Researchers recruited 12 individuals, 25 to 34 years old, from the general population of Detroit and offered $80 for their participation. Recruited drivers were asked to pretend they were taking a long trip and were not only encouraged to bring their cell phones with them on their test drive, but were provided with ready access to reading material, food, drinks, and entertainment media. As participants showed up for the study, researchers explained to them that someone from the research team would be joining them inside the vehicle. Each driver was told that their fellow passenger had a homework assignment he needed to complete during the trip, so he would be watching a DVD on his laptop for most of the drive.

The best way to avoid collisions will be to teach driverless cars to drive more like people, carelessly and illegally.

The 12 human subjects were placed into common freeway driving scenarios on the test track, and their responses and activities were measured and recorded. The researchers’ goal was twofold: first, to gauge the temptation to engage in secondary tasks such as eating, reading, or watching a video; second, to measure the degree to which driver attention would wander if software were to handle most of the driving. In other words, the researchers were testing whether automated driving technologies would encourage humans to mentally tune out, engage in inappropriate behavior behind the wheel, or lose their situational awareness, their ability to perceive critical factors in the driving environment.

It turned out that most human drivers, when presented with technology that will drive for them, eagerly become guilty of all three bad driving behaviors. The “fake homework” strategy of the researcher, combined with the competence of the adaptive cruise control and lane centering software, lulled the participants into feeling secure enough to stop paying attention behind the wheel. Over the course of approximately three hours of test driving time, during which different automated driving technologies were used, most drivers engaged in some form of secondary task, most frequently eating, reaching for an item in the back seat, talking on the cell phone and texting, and sending emails.

The lane-keeping software especially invited the human drivers to engage in secondary activities. When the lane-keeping software was switched on, a whopping 58 percent of the drivers watched a DVD for some time during the trip. Twenty-five percent of the drivers used the free time to get some reading done, an activity that raises crash risk by a factor of 3.4.

The human drivers’ visual attention was not much better. Once again, when the lane-centering software took the wheel, driver attention wandered. Overall, drivers were estimated to be looking away from the road about 33 percent of the time during the course of the three-hour trip. More dangerously, the drivers engaged in long and potentially dangerous “off-road glances” lasting more than two seconds an estimated 3,325 times over the course of the study. The good news, such as it was, is that these longest and most dangerous glances accounted for only 8 percent of total driving time.

Clearly, this particular study is just a starting point. Twelve people is a fairly small sample, and more research on driver inattention is needed. One interesting finding that emerged was that although most drivers were eager to read, eat, watch movies, or send email while at the wheel, some were able to resist the temptation to tune out. For reasons that deserve additional research, not all human drivers were so quick to give up their responsibilities at the wheel. As the researchers concluded, “this study found large individual differences in regard to the nature and frequency of secondary task interactions suggesting that the impact of an autonomous system is not likely to be uniformly applied across all drivers.”

There’s a tipping point at which autonomous driving technologies will actually create more danger for human drivers, not less. Imagine if the 12 human drivers in Virginia Tech’s research project were given a seat in a fully autonomous vehicle for a three-hour drive. It is highly likely that the intensity of their secondary activities would increase to the point where the human driver would fall asleep or become deeply absorbed in sending email. Full autonomy would make it nearly impossible for a deeply distracted or sleepy human driver to effectively take over the wheel if control were abruptly handed over in a challenging situation.

In another study, at the University of Pennsylvania, researchers sat down with 30 teens for a frank discussion of teen drivers’ cell-phone usage at the wheel. Two central points emerged. First, while teens said they understood the dangers of texting while driving, they still did it. Even teens who initially claimed they did not use their cell phones while driving reluctantly revealed, when pressed, that they would wait until they were at a red light to send a text. Second, teens used their own classification system to define what constituted “texting while driving” and what didn’t. For example, they said that checking Twitter while driving did not constitute texting; nor did taking a passenger’s picture.

Wandering human attention is one risk. Another risk of having humans and software share the wheel is that human skills, if not used regularly, will degrade. Like the pilots of Flight 447, human drivers, if offered the chance to relax behind the wheel, will take it. If a human hasn’t driven in weeks, months, or years and is suddenly asked to take the wheel in an emergency, not only will she not know what’s going on outside the car, but her driving skills may have gotten rusty as well.

The temptation to engage in secondary tasks and the so-called handoff problem of split responsibility between human and machine are such significant dangers in human/machine interactions that Google has opted to skip the notion of a gradual transition to autonomy. Google’s October 2015 monthly activity report for its driverless-car project concludes with a bombshell: Based on early experiments with partial autonomy, the company’s path forward will focus solely on full automation. The report states, “In the end, our tests led us to our decision to develop vehicles that could drive themselves from point A to B, with no human intervention. … Everyone thinks getting a car to drive itself is hard. It is. But we suspect it’s probably just as hard to get people to pay attention when they’re bored or tired and the technology is saying ‘don’t worry, I’ve got this … for now.’ ”

At the time this was written, Google’s driverless cars had been involved in a total of 17 minor fender benders and one low-speed collision with a bus. In the 17 fender benders, the culprit was not the driverless car but other human drivers. On Feb. 14, 2016, however, Google’s car had its first significant accident when it “made contact” with the side of a city bus. Unlike the previous 17 minor collisions, this accident was the fault of the car’s software, which erroneously predicted that if the car rolled forward, the bus would stop.

With the exception of the run-in with the bus, the rest of Google’s accidents have happened because, ironically, Google’s cars drive too well. A well-programmed autonomous vehicle follows driving rules to the letter, confusing human drivers, who tend to be less meticulous behind the wheel and not always so law-abiding. The typical accident scenario involves one of Google’s obedient driverless cars trying to merge onto a highway or turn right on a red light at a busy intersection. Impatient human drivers, not expecting the car’s precise adherence to speed limits and lane-keeping laws, accidentally run into the driverless car.

So far, fortunately, none of Google’s accidents have resulted in any injuries. In the near-term future, the best way to avoid collisions will be to teach driverless cars to drive more like people, carelessly and illegally. In the longer term future, the best way to solve the problem of human drivers will be to replace them with patient software that never stops paying attention to the road.

As car and tech companies gather at the table to play their high-stakes, global game of automotive poker, it remains to be seen who will have the winning hand. If federal officials pass laws that mandate a “human in the loop” approach, the winners will be the car companies, which will retain control over the automotive industry. On the other hand, if the law eventually permits, or—for safety reasons—even requires full autonomy for driverless cars, then software companies will take the lead.

Google retains some major advantages as the undisputed industry leader in digital maps and deep-learning software. From the perspective of business strategy, Google’s lack of a toehold in the automotive industry could actually be one of its key strengths. Analyst Kevin Root writes that “Unlike OEMs, they [Google] are not encumbered by … lost revenue from bypassing the new feature trickle down approach, they are developing for the end state of fully autonomous driverless cars and appear to have a sizeable lead.” Add to that Google’s eagerness to create a new revenue stream that’s not reliant on selling Internet ads, currently its primary source of revenue.

One thing is clear: Regardless of how the transition to driverless cars unfolds, the automotive industry will be forced to develop new core competencies. To remain a player in the new business of selling driverless cars, car companies will have to master the difficult art of building artificial-intelligence software, a feat that has eluded the world’s best roboticists for decades.

Hod Lipson is a professor of mechanical engineering at Columbia University and an author of the award-winning book Fabricated: The New World of 3D Printing. Melba Kurman writes about disruptive technologies and is an author of the award-winning book Fabricated: The New World of 3D Printing.

Reprinted with permission from the MIT Press. 

Lead image credit: Shutterstock
