Who is in the Hot Seat when No One is in the Driver’s Seat? How Self-Driving Cars Present a Difficult Question of Personal Liability

Vehicles with automated driving abilities have become a far more common sight in recent years. Regardless of who or what may be at fault for accidents involving autonomous cars, it is clear that the law on the subject will soon have to address this issue.

There was once a time when the idea of self-driving cars seemed like something straight out of a movie, reminiscent of the scenes from Minority Report and Total Recall.

Now, vehicles with automated driving abilities have become a far more common sight.  According to a 2020 report, sales of cars with partial automation features, such as emergency braking, self-parking, and adaptive cruise control, rose more than 460% in 2019 compared with 2018.

As self-driving cars move closer to reality, it is not surprising that the laws surrounding negligence and product liability face new challenges.

As cars have gradually become more autonomous over the past few decades, a system has been created to categorize the degrees of automation.  Guidelines promulgated by the Society of Automotive Engineers (SAE) break down five levels of vehicle automation as follows:

Level One: cars in which a single aspect of driving is automated, but the driver remains in complete control of the car.  These vehicles have been around since the 1990s and use cameras and sensors to offer relatively simple automation such as lane assistance and cruise control.

Level Two: cars in which two or more elements are controlled by chips or sensors, though a human still manually operates the vehicle.  These systems weave together multiple data sources to accelerate, brake, and steer on the driver’s behalf.

Level Three: cars that can handle all aspects of driving, though it remains essential that human drivers stay engaged and ready to intervene.  Level Three vehicles can drive themselves in limited environments while assuming control of all critical safety functions.

Levels Four & Five: these levels are expected to be achieved within the next decade, rendering drivers obsolete.  Such vehicles are expected to use features like high-definition mapping, car-to-car communication systems, and refined radar to eliminate the need for human drivers.

Most cars on the market today are Level Two vehicles, though some with advanced safety features border on Level Three.  While these Level Two and Three vehicles still require a human driver, they are frequently referred to as “self-driving cars.”

Though far less common than accidents involving a manually operated vehicle, accidents involving autonomous cars are not unheard of.

Researchers estimate autonomous vehicles could reduce accident rates by up to 90%, which would save more than 30,000 lives each year and prevent millions of injuries on American roads.  Advancements in technology can mitigate many human weaknesses, as former General Motors Vice Chairman Bob Lutz put it: “The autonomous car doesn’t drink, doesn’t do drugs, doesn’t text while driving, and doesn’t get road rage. Autonomous cars don’t race other autonomous cars, and they don’t go to sleep.”  Though people may eventually be safer in autonomous cars than in traditional vehicles, there’s a long way to go and a short time to get there.

In the spring of 2016, history was made when a Tesla Model S driver became the first person to die in a self-driving car.  The driver, Joshua Brown, had turned on the autonomous driving system of his Tesla and set the cruise control to 74 miles per hour.  As the car raced down a Florida highway, a tractor-trailer from an intersecting road crossed the highway.  With the Autopilot system still engaged, Brown’s car struck the tractor-trailer, killing Brown.

According to an investigation conducted by the National Highway Traffic Safety Administration (NHTSA), the vehicle’s self-driving system was not at fault.  The agency found that while the Autopilot system was intended to prevent Tesla cars from rear-ending other vehicles, it was not intended to handle situations in which another vehicle crosses the car’s path from an intersecting roadway.  In short, there were no “defects in the design or performance” of the system.  Ultimately, Tesla avoided liability for Brown’s death because Autopilot was intended to aid, not replace, human drivers.

The current litigation landscape provides insight into who or what may ultimately be responsible for accidents involving autonomous cars.  

Unfortunately, since the first fatal Autopilot crash in 2016, several other tragic accidents have occurred in the realm of “self-driving cars.”  These accidents have all raised the paramount question of who, or what, is really responsible.  Tesla may have avoided liability in the case of Joshua Brown, but the technology is changing.  Google, Mercedes-Benz, Tesla, Uber, and Volvo are among the companies working to develop fully autonomous cars intended to drive themselves without human intervention.  Google’s prototypes do not have steering wheels or brake pedals, rendering a human driver superfluous.  The question of liability becomes even more pertinent as car manufacturers begin to release Level Three cars and design Level Four cars.

There are many possible answers to the question of who will be responsible for a car’s “negligence.”  Does the answer lie with car manufacturers such as Tesla and General Motors?  Or perhaps with the developers of the autonomous driving technology, like Google?  Or possibly with the person in the driver’s seat, who might not actually be “driving” the car?

Guidelines have been proposed to adjust the personal liability of drivers based on their ability to actively operate autonomous vehicles.

The NHTSA opined in a report on autonomous vehicles that different legal standards of liability should apply “based on whether the human operator or the automated system is primarily responsible for monitoring the driving environment.”  This means that for Level Four and Five vehicles, and potentially Level Three vehicles, the “Highly Automated Vehicle” (HAV) system itself would be deemed the driver of the vehicle for purposes of state traffic laws.  In other words, the driving system, not the passengers, would be criminally liable when traffic laws are violated.  The rationale behind this proposed framework is sound: if a passenger of a fully automated car has no ability to operate the vehicle, prosecuting that individual serves no purpose of the criminal law.

Yet as Gary Marchant, Director of the Center for Law, Science, and Innovation at the Sandra Day O’Connor College of Law, argues, “multiple defendants are possible: the company that wrote the car’s software; the businesses that made the car’s components; the automobile manufacturer; maybe the fleet owner, if the car is part of a commercial service, such as Uber.  How would you prove which aspect of the HAV failed and who should be the liable party?”

One possible solution is hiring experts to investigate the system malfunctions of autonomous cars.  However, this would transform relatively simple civil traffic accident cases into costly, lengthy liability litigation with multiple defendants.  Another proposed solution comes in the form of strict liability.  A strict liability framework would place the burden on the manufacturers of HAVs rather than on individuals, on the theory that manufacturers are better equipped, financially and legally, to dispute liability.

The marketing rhetoric surrounding automated cars sets drivers’ expectations and complicates the already difficult question of liability.

For the most part, vehicles on the market today that are equipped with autonomous driving technology (Level One or Two vehicles) clearly state that the technology is not a substitute for the driver’s full attention.  However, that warning runs counter to the very purpose of the technology.  As a result, manufacturers walk a thin line when advertising their vehicles’ capabilities.  For example, Tesla CEO Elon Musk came under fire for releasing a video of himself driving a Tesla while not paying attention to the road.  Such an advertisement can conflict with the vehicle’s warnings and owner’s manual while potentially reducing driver compliance.  Indeed, a cursory online search yields dozens of videos showing Tesla owners driving while reading the newspaper, eating and drinking, or otherwise disengaged and not paying attention to the road.

Part of the problem, according to some, is the terminology used to describe the technology.  For instance, German authorities recently banned Tesla from using the term “autopilot” in any of its advertisements in that country.  The concern is that the term “autopilot” may lead drivers to rely too heavily on the technology and thereby increase the risk of a major accident.  Musk responded that such criticism was “idiotic” and that “if something goes wrong with autopilot, it’s because someone is misusing it and using it directly contrary to how we’ve said it should be used.”

The current legal framework will have to be carefully reconsidered to keep pace with changes in technology.

Regardless of who or what may be at fault for accidents involving autonomous cars, it is clear that the law on the subject will have to address this issue soon.  While technology changes quickly, legislation moves slowly.  If the current law does not rise to meet the growing concerns of drivers, courts may be forced to make ad hoc decisions in the meantime.

About Kaitlin Autrey
Kaitlin is a third-year student at Campbell Law and is a Staff Writer for the Campbell Law Observer. Kaitlin is from western North Carolina and calls the mountains home. Prior to law school, Kaitlin received her undergraduate degree from the University of North Carolina at Greensboro where she majored in English Literature and Political Science. During college, Kaitlin worked for Guardian Ad Litem and developed a passion for working with children. Kaitlin has served as the president of Students Protecting Minors and volunteers with Capital Area Teen Court. This past summer she interned with Legal Aid of North Carolina as a Martin Luther King, Jr. Fellow and will resume her internship with a local family law office this fall. Her interests include child advocacy, family law, and education law.