Who Pays the Piper When A.I. Causes Traffic Deaths?

Author: Jason Lomberg, Editor, North America, PSD

Date: 06/28/2017


2016 was the deadliest year for motorists in nearly a decade, and human error causes up to 90% of accidents -- which makes self-driving cars seem all but inevitable.

This raises a host of questions, not the least of which is ‘what happens when autonomous vehicles cause traffic deaths?’ Who’s at fault?

It’s no small matter -- consulting firm KPMG estimates that autonomous cars could prompt an 80% reduction in motor vehicle accidents by 2040. The catch? The auto insurance market could shrink by up to 60%.

What remains could be a radically different business. Once humans are removed from the controls, personal liability insurance becomes completely irrelevant.

“The risk profile of vehicles is changing daily, and the subsequent drop in industry loss costs would reduce the size of the auto insurance market, trigger consolidation in the personal lines space, attract new competitors, and force dramatic operational changes within carriers,” says Jerry Albright, principal in KPMG’s Actuarial and Insurance Risk practice.

But what of these ‘dramatic operational changes’? How would juggernauts like Geico and Allstate survive when individuals are never liable?

The consensus seems to be that carriers would shift their focus to OEMs.

“With driverless vehicles, auto insurers might shift the core of their business model, focusing mainly on insuring car manufacturers from liabilities from technical failure of their AVs,” notes McKinsey Analytics. “This change could transform the insurance industry from its current focus on millions of private consumers to one that involves a few OEMs and infrastructure operators.”

Before autonomous cars become ubiquitous, we need to settle the paramount question of liability. If self-driving vehicles collide, do they swap insurance info via Bluetooth? Do the occupants trade OEM business cards?

A similar conundrum dogs the widespread rollout of autonomous soldiers -- who's responsible when faulty A.I. kills people? Human error is tragic, but we harbor a primal, inexplicable fear of machines (too much sci-fi, perhaps?). Many of us demand perfection from robots, even though humans -- especially on the road -- are far more fallible. And any hint of robotic autonomy gives us fevered dreams of Terminators and Skynet.

I’m willing to have a reasonable debate about autonomous vehicles (and any hint of introducing them by legislative fiat), but irrational fears shouldn’t drive the discussion. Autonomous cars are imminent. It’s up to us – and especially the engineering community – to facilitate their responsible deployment.

PSD

www.powersystemsdesign.com
