Mr. Roboto has rights, too? The rise of learning A.I.s and robots leads to considerations of robotic legal status and liability

With the use of Artificial Intelligence on the rise, legislators must chart new waters to deal with liability.

Isaac Asimov, author of the famous science fiction collection I, Robot, created the Three Laws of Robotics.  First, a robot may not injure a human being or, through inaction, allow a human being to come to harm.  Second, a robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.  Finally, a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.  These laws were first introduced in his 1942 short story, “Runaround.”

Fast forward to 2017: companies are creating ever more sophisticated learning Artificial Intelligence (A.I.) programs and robots.  These manufactured devices are becoming more autonomous and a greater part of everyday human life.  Meanwhile, governments around the world are trying to catch up with these advancements.

[S]ince robots are increasing in number, Europe will need to ensure that robots will remain in the service of humans.

The European Parliament, the voting body of the European Union, proposed a resolution based on a report from the Committee on Legal Affairs that gives ethical guidelines to developers as well as a general framework for how the law should treat robots.  While this may seem too abstract and too soon, Luxembourgish Member of the European Parliament Mady Delvaux, the report’s author, argues that since robots are increasing in number, Europe will need to ensure that robots will remain in the service of humans.

The report tries to set the standard of liability by suggesting a strict liability rule tempered by the robot’s level of autonomy.  Strict liability requires the injured party to prove only that the party being sued caused the damage, without proving fault.  The report states that the future legislative framework should in no way restrict the damages that victims may recover, and that the developers and companies that created the robots should be held strictly liable for their creations.  Later in the report, however, that liability is balanced against the individual robot’s degree of autonomy.

“[A] robot’s autonomy can be defined as the ability to take decisions and implement them in the outside world, independently of external control or influence.”  Individual autonomy is a cornerstone of Western thought.  It is why we hold human individuals responsible for their crimes and civil wrongs: they are able to make their own decisions, and governments then reward or punish those decisions under the law.  Taken together, this principle and the report’s definition of robotic autonomy suggest that autonomous robots may be treated as analogous to humans, and by analogy, robots may have to be given rights.  Paragraph T of the report acknowledges this and questions whether robots fit within an existing category of legal status “or whether a new category should be created, with its own specific features and implications as regards the attribution of rights and duties, including liability for damage.”

If strict liability is adopted as the rule for injuries caused by robots, companies may choose to slow down…

If strict liability is adopted as the rule for injuries caused by robots, companies may choose to slow down and even delay the production of more robots for fear of the legal costs they may incur.  To calm this fear, the report suggests that when an injury does occur, the company will be held liable in proportion to the robot’s degree of autonomy.  Factors such as the amount of “education” the robot received from its “teachers” and the robot’s learning capacity should be considered when assessing liability.  Once liability has been assessed, the companies would pay through a mandatory insurance framework, supplemented by a mandatory robotics fund.

The robotics fund would not only ensure that injured parties are compensated after litigation but also promote investment in robotics.  One phrase stands out regarding the fund’s additional purposes, however.  The fund would also allow for “…donations or payments made to smart autonomous robots for their services, which could be transferred to the fund.”  This suggests that robots may one day be paid just like human laborers.  Their payments would not be for their own use but would be deposited into the fund to further robotic technology and to compensate humans for injuries.

Once the European Parliament adopts the resolution, it is up to the individual countries of the European Union to implement it.  Unlike the European Union, the United States has not adopted any uniform way of viewing robots.  American law currently lacks the tools to deal with autonomous robots, treating robots as things to be continuously programmed rather than as machines able to perform tasks on their own.

Currently, the most sophisticated of these learning A.I.s are self-driving cars and legal research bots.

Robots autonomous enough to make decisions completely independent of human programmers are still decades away.  Until then, developers continue to work on learning A.I.s that adapt to the world around them.  Currently, the most sophisticated of these learning A.I.s are self-driving cars and legal research bots.

Self-driving cars have been the dream of motorists ever since Ford rolled out the first Model T.  Google, Tesla, and other companies have built programs into their cars that react to changing traffic patterns and human drivers.  These programs receive and process information through a system of devices including GPS, sensors, cameras, and rangefinders.  But as with all new technology, there have been a few noted hiccups along the way.
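To make that pipeline concrete, below is a minimal sketch of how readings from several such devices might be fused into a single driving decision.  The sensor fields, thresholds, and decision rules here are illustrative assumptions made for this article, not any manufacturer’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One snapshot of (hypothetical) sensor inputs."""
    gps_speed_mph: float        # current speed reported by GPS
    camera_sees_object: bool    # vision system flags an object ahead
    range_to_object_m: float    # distance to that object from a rangefinder

def decide(frame: SensorFrame) -> str:
    """Fuse the sensor readings into one control decision."""
    # An object close ahead overrides everything else: brake.
    if frame.camera_sees_object and frame.range_to_object_m < 30.0:
        return "brake"
    # Otherwise, ease off above a (hypothetical) speed cap.
    if frame.gps_speed_mph > 70.0:
        return "ease_off"
    return "maintain"

# A trailer crossing 25 meters ahead at highway speed should trigger braking.
print(decide(SensorFrame(gps_speed_mph=74.0,
                         camera_sees_object=True,
                         range_to_object_m=25.0)))  # -> "brake"
```

Real systems run loops like this many times per second over far richer data; the hard cases, as the incidents below show, are the ones the rules never anticipated.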

Google’s first at-fault self-driving car accident happened in February 2016, when a Google car assumed that a public transit bus would slow down while the car made a turn.  Since then, the company reports that only ten accidents have occurred.

Tesla Motors… already had one fatal crash due to a miscalculation by the A.I. program.

Not all companies have been so lucky.  Tesla Motors, led by inventor Elon Musk, already had one fatal crash due to a miscalculation by the A.I. program.  Joshua Brown of Ohio, a former U.S. Navy SEAL and the proud owner of a Tesla Model S, had set cruise control to 74 mph while driving in Florida.  Using Tesla’s A.I. “Autopilot,” Brown took his hands off the wheel and allowed the car to drive itself.  Unfortunately, Autopilot misjudged a tractor-trailer’s turn onto the highway, causing the car to pass underneath the trailer and killing Brown instantly.

The National Highway Traffic Safety Administration declared in January, after several months of investigation, that Autopilot had performed as programmed and that crossing-traffic situations “are beyond the performance capabilities of the system.”  The National Transportation Safety Board, “[c]harged with determining the probable cause of transportation accidents,” is also investigating the crash but has not released any results.  Tesla says that it was not liable for the crash because the driver took his hands off the wheel and was not monitoring the road, even though drivers must agree to do both before engaging Autopilot.

With the rise of self-driving cars and their incidents, lawsuits will inevitably be filed against these car companies.  Enter ROSS.  ROSS, which is not an acronym, is a legal research program built on IBM’s original learning A.I., Watson.  According to Jimoh Ovbiagele, co-founder of the program, “ROSS doesn’t retrieve thousands of documents for you to sift through. It gives you an evidence-based response.”  Large law firms such as Womble Carlyle and Baker & Hostetler are already using ROSS.
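As a rough illustration of the “evidence-based response” idea, the sketch below ranks candidate passages against a question and returns only the best-supported one, rather than dumping every matching document on the researcher.  The scoring function and sample passages are simplified assumptions for this article; ROSS’s actual architecture is proprietary and far more sophisticated.

```python
from collections import Counter

def relevance(question: str, passage: str) -> int:
    """Crude relevance score: how many words the passage shares with the question."""
    q_words = Counter(question.lower().split())
    p_words = Counter(passage.lower().split())
    return sum(min(q_words[w], p_words[w]) for w in q_words)

def answer(question: str, passages: list) -> str:
    """Return the single most relevant passage instead of the whole result set."""
    return max(passages, key=lambda p: relevance(question, p))

passages = [
    "Strict liability requires proof only that the defendant caused the harm.",
    "Negligence requires proof of duty, breach, causation, and damages.",
]
print(answer("what must be proven under strict liability", passages))
# -> the strict liability passage
```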

Learning A.I.s and robots will eventually be part of every human being’s life.  Some view this as the next industrial revolution and scientific breakthrough.  Asimov once said, “I do not fear computers. I fear the lack of them.”  Still others cannot help but wonder what will become of the human race as people are replaced with robots.  Movies and books are sold on the idea that someday human creations will turn on their creators.  As the monster in Mary Shelley’s Frankenstein said to his creator, “You are my creator, but I am your master; obey!”

About Claire Scott
Claire Scott is a third-year law student and a Senior Staff Writer for the Campbell Law Observer. Originally from Chesapeake, VA, Claire is a Campbell University alumna. After her 1L summer, she worked in the Harnett County, NC District Attorney's Office as well as the District 11A Veterans Treatment Court. Her legal interests include estate planning and veterans law.