The dilemma behind autonomous vehicles: Creating morality laws to regulate self-driving cars

Empty cockpit of an autonomous, self-driving vehicle, with a heads-up display (HUD) and digital speedometer.

Cutting-edge vehicles like the popular Tesla Model S come equipped with autonomous driving features that allow them to control themselves. Artificial intelligence enables the car to manage speed and direction and to adapt to traffic patterns, eliminating the need for a human driver. But are these machines capable of everything a human driver can do? The answer is no; there is one important thing these onboard computers cannot do: make ethical decisions.

Psychologists and philosophers have studied ethical decision making since at least the nineteenth century. In 1967, Philippa Foot devised the famous trolley problem, which the Stanford Encyclopedia of Philosophy credits as a classic philosophical thought experiment. The experiment presents a dilemma: participants imagine themselves as the driver of a runaway trolley whose brakes have failed. The driver can steer away from five potential victims, killing one person instead. The goal of the experiment was to discern whether people prefer a utilitarian approach that promotes the greatest good for society, even if it means actively causing harm, or whether they instead follow standard rules that forbid inflicting harm directly, even if more damage is done as a whole.

A New Age Trolley Experiment

The thought experiment has since been criticized by many as impractical, with critics arguing that hypotheticals cannot accurately gauge how an individual would act when presented with the same choice in real life. Even so, results tended to show that individuals were willing to sacrifice the life of one to save the group. These moral and ethical hypotheticals may seem far-fetched, but autonomous vehicles may have reintroduced them to the world's roadways, presenting legal issues for lawmakers and manufacturers alike. Imagine the following scenario and consider it a new age trolley experiment.

A self-driving vehicle is proceeding down the roadway at the posted speed limit of 45 miles per hour. Suddenly, a person steps into the roadway, too late for the car to brake in time. The vehicle must make an ethical decision: swerve off the road, avoiding the person but possibly injuring the driver, or barrel forward, seriously injuring the passerby while posing less risk to the driver. The hypothetical presents a tough decision, and there may be no right answer; the moral factors driving the choice may differ for everyone. Now consider this: what if the autonomous car must choose between swerving off the road to avoid a group of several people or instead barreling forward with less risk of injury to the driver? Does the decision to protect oneself become less defensible when the risk of injury now falls on a large group of people? Does the current legal framework even allow flexibility for morality laws to be put in place for autonomous cars? Certain lawmakers seem to believe so.
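To see what is actually at stake for regulators, consider a deliberately simplified sketch of two decision policies a vehicle could, in principle, encode. The scenario, risk numbers, and rule names below are invented for illustration; no manufacturer's real control logic looks like this.

```python
# Hypothetical sketch: two competing decision rules an autonomous
# vehicle *could* encode for the scenario above. The numbers are
# made up, and "harm" is crudely measured as expected injuries.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    pedestrians_at_risk: int   # people outside the car who may be seriously injured
    occupant_risk: float       # estimated probability of injuring the car's occupant

SWERVE = Outcome("swerve off the road", pedestrians_at_risk=0, occupant_risk=0.4)
CONTINUE = Outcome("continue forward", pedestrians_at_risk=5, occupant_risk=0.1)

def utilitarian_choice(a: Outcome, b: Outcome) -> Outcome:
    """Minimize total expected harm, counting everyone equally."""
    harm = lambda o: o.pedestrians_at_risk + o.occupant_risk
    return min((a, b), key=harm)

def self_protective_choice(a: Outcome, b: Outcome) -> Outcome:
    """Minimize risk to the vehicle's own occupant, ignoring everyone else."""
    return min((a, b), key=lambda o: o.occupant_risk)

print(utilitarian_choice(SWERVE, CONTINUE).action)      # -> swerve off the road
print(self_protective_choice(SWERVE, CONTINUE).action)  # -> continue forward
```

A morality law would, in effect, dictate which of these two functions, or some third alternative, the vehicle must run.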

Proposed legislation provides room for necessary federal changes

Proposed legislation seems to provide ample room for regulators to step in and make the necessary changes at the federal level. The United States House of Representatives introduced the concept of federal preemption for autonomous cars on September 6, 2017, when it passed the SELF DRIVE Act (H.R. 3388). If eventually enacted into law, the SELF DRIVE Act contains several crucial provisions that bear on morality laws for autonomous vehicles.

First, the Act expands exclusive federal jurisdiction beyond motor vehicle safety to also encompass motor vehicle operations, thus prohibiting states from separately regulating the operation of highly automated vehicles.

Next, and perhaps most importantly in the context of morality laws, the Act marks the beginning of the process of updating vehicle safety standards to account for new autonomous vehicles. It requires the National Highway Traffic Safety Administration (NHTSA) to study and develop the safety measures necessary for autonomous cars. Arguably, this umbrella gives the NHTSA room to incorporate morality laws into federal law as part of updated vehicle safety standards.

Finally, the Act establishes an advisory committee to guide and provide recommendations to the U.S. Department of Transportation (DOT) on automated vehicles. The SELF DRIVE Act was received in the Senate and referred to the Committee on Commerce, Science, and Transportation for further review. If enacted into law, it would allow the government to regulate the moral decision making built into a car's artificial intelligence. In the future, morality laws may play a role as one of several cutting-edge safeguards employed by self-driving vehicles such as the Tesla Model S. This futuristic utopia of self-driving cars may not be too far off.

Preparing for the future of transportation

On October 4, 2018, the U.S. Department of Transportation released its latest federal guidance for automated vehicles, entitled “Preparing for the Future of Transportation: Automated Vehicles 3.0.” This guidance, known as AV 3.0, reinforced USDOT's commitment to supporting the safe integration of automation into the broader transportation system. That commitment can likely be construed broadly enough to allow the government to create morality laws for autonomous vehicles, since such a safeguard may be necessary to support the safe integration of these cars into the transportation system as a whole.

AV 3.0 fully supports the “development of automation-related standards, which can be an effective non-regulatory means to advance the integration of automation technologies.” This broad language could eventually be construed to grant the government the power to create and enforce morality laws for autonomous vehicles. Numerous standards and strict regulations for autonomous vehicles are nothing new.

The National Conference of State Legislatures compiles annual statistics on legislation regarding autonomous vehicles. Its records report that thirty-three states introduced autonomous-car legislation in 2017. Furthermore, to the extent autonomous vehicles become a federal question, federal law takes precedence over state regulations on the matter. But the million-dollar question for lawmakers becomes: how will these morality laws be created? Morals differ around the country and change frequently. Should all autonomous vehicles be subject to the same moral laws? Will some manufacturers manipulate the ethical decision making of their products as a marketing ploy? These are several of the problematic questions lawmakers will face as they continue to interpret and create regulations for autonomous vehicles. However, some entrepreneurs have recognized this need and are looking to a combination of human capital and artificial intelligence to try to solve these ethical dilemmas.

Artificial Intelligence combined with human-computer systems may solve the ethical problems lawmakers face when crafting autonomous vehicle regulations

Louis Rosenberg is a well-known scientist who focuses on artificial intelligence and human-computer systems. His crowdsourcing-based method of ethical decision making, known as Swarm AI, may be useful in the context of morality-based laws for autonomous vehicles. As the current CEO of Unanimous AI, Rosenberg is using advanced artificial intelligence algorithms to connect people and enable them to answer questions together on unique Swarm AI interfaces. Swarm allows analysts to review results with precision by tracking the personal confidence behind each individual user's response. Using the Swarm AI software, Rosenberg polled a group of 20 horse racing experts who together correctly predicted the top four finishers at the Kentucky Derby. Independent of the Swarm interface, no single expert was able to pick the top four finishers correctly; acting in conjunction with one another, however, the group's accuracy improved tremendously. Arguably, Swarm technology could be one option for collectively gauging responses to tough ethical questions, such as the new age trolley problem involving self-driving cars. While many questions remain, crowd-based solutions paired with artificial intelligence may be able to make moral decisions that are representative of the population and serve as a basis for lawmakers when establishing morality-based autonomous vehicle laws.
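Unanimous AI's real-time swarming algorithm is proprietary, but the core intuition described above, weighting each participant's answer by the confidence behind it rather than taking a simple majority vote, can be sketched in a few lines. Everything below, from the function name to the poll numbers, is a hypothetical illustration, not the actual Swarm AI system.

```python
# Simplified, hypothetical sketch of confidence-weighted crowd
# aggregation. Each participant submits an answer plus a confidence
# score in [0, 1]; answers are ranked by total confidence rather
# than by a raw head count.

from collections import defaultdict

def swarm_answer(responses):
    """responses: list of (chosen_option, confidence) pairs, one per participant."""
    scores = defaultdict(float)
    for option, confidence in responses:
        scores[option] += confidence
    # The winning option is the one backed by the most total confidence.
    return max(scores, key=scores.get)

# Hypothetical poll on the new age trolley problem:
poll = [
    ("swerve", 0.9), ("swerve", 0.8), ("continue", 0.3),
    ("continue", 0.4), ("swerve", 0.6),
]
print(swarm_answer(poll))  # -> "swerve" (2.3 vs. 0.7 total confidence)
```

A regulator could, in principle, run this kind of aggregation over large, demographically representative panels to ground a morality rule in measured public judgment rather than in a single legislator's intuition.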

Current legislation has left room for lawmakers to address the need for morality-based laws for autonomous vehicles. Tech-savvy entrepreneurs may be able to pair human capital with artificial intelligence to answer several of the age-old philosophical questions that our nation's autonomous cars, and its lawmakers, may soon have to confront.

About Kevin Latshaw
Kevin is a third-year student at the Campbell University School of Law and currently serves as the Editor-in-Chief of the Campbell Law Observer. Originally from Wilmington, North Carolina, Kevin majored in Communication Studies with a concentration in Marketing and a minor in Entrepreneurship and Innovation at The University of North Carolina Wilmington. Before starting law school, Kevin spent a summer in Washington, DC, studying start-up law and the regulatory framework for venture capital at the Duke Law D.C. Summer Institute on Law and Policy. During his second year of law school, Kevin and his co-counsel earned second place in the Richard T. Bowser Intramural Client Counseling Competition. Kevin is active on campus and serves as the President of the Professional Law Student Association, a third-year class representative for the Student Bar Association, and Chair of the SBA Budget Committee. During the summer of his 2L year, Kevin studied International and Comparative Law at the University of Cape Coast, Ghana. During the fall semester of his third year, Kevin was a legal extern in the general counsel's office of a large corporation in Raleigh, N.C. Currently, Kevin is one of the first students to work as a Certified Third-Year Practice Intern at the Innovate Capital Business Law Clinic, assisting start-up clients with legal issues such as business entity formation, employee and contractor documentation, equity compensation plans and awards, commercial agreements such as NDAs, capital raising, and other operational topics. Kevin is interested in transactional, business, and international law.