Automated Driving Technologies Will Never Be Perfect

Author:
Jason Lomberg, North American Editor, PSD

Date:
04/23/2018

And that’s fine. They don’t have to be.

As a species, we have an innate fear of a fictional robot apocalypse, and it colors our every reaction to semi-autonomous and autonomous systems. Case in point – the agita over the handful of semi-autonomous and autonomous vehicle fatal crashes.

Back in March, a Tesla Model X slammed into a concrete lane divider at high speed, killing the driver in a horrific, fiery wreck. The hook? The vehicle’s Autopilot was engaged. Tesla maintains that Autopilot is a “driver-assistance system” – not self-driving, and definitely not fully autonomous. Drivers are warned to keep their hands on the wheel, and they receive several visual and audio warnings if contact isn’t maintained for an extended period.

Tesla stated that “the driver’s hands were not detected on the wheel for six seconds prior to the collision.” The victim’s family believes Tesla is deflecting blame from its own faulty detection system, which allegedly misread the lane markings on that stretch of the highway.

And this is hardly the first fatal accident involving semi-autonomous or autonomous vehicles. Also in March, a fully autonomous Uber SUV struck and killed a pedestrian who, by all accounts, wandered into traffic at the last second. Pedestrian deaths like this one are tragically common with human drivers, but we hold AI to higher standards.

Traffic accidents are the 4th leading cause of death in the US, and the US Department of Transportation estimates that 94% of all serious crashes are due to human error. The DOT concludes that “automated driving innovations could dramatically decrease the number of crashes tied to human choices and behavior.”

So at least on the macro level, the issue of culpability is irrelevant. Semi-autonomous and autonomous vehicle technologies will make fewer mistakes and cause fewer fatal accidents than human drivers. But at the slightest hint that a human isn’t 100% culpable for their own death, we throw all logic and objectivity to the wind.

As with most new technologies – especially ones that remove humans from the controls – automated driving technologies provoke a subconscious revulsion. Blaming DUIs, texting, or distractions for fatal crashes is oddly comforting. But automation – even “driver-assistance systems” – seems more random. There’s no easy culprit.

What if Tesla’s Autopilot is imperfect? And what if improper use has fatal consequences? The National Highway Traffic Safety Administration estimates that Tesla’s Autopilot results in 40% fewer crashes. Do we hold out for 100%?

Even Level 5 of the Society of Automotive Engineers’ automation roadmap – where “an Automated Driving System (ADS) on the vehicle can do all the driving in all circumstances” – will inevitably see fatal accidents. And short of artificial intelligence that evolves beyond its programming, even the most sophisticated automated systems will remain imperfect.

We need to accept the possibility of failure in semi-autonomous and autonomous driving technologies or be content with higher death rates that are nonetheless entirely our own fault.

PSD

www.powersystemsdesign.com
