The Moral Dilemma of Driverless Cars

Testing of a self-driving minivan in a Los Gatos neighborhood, 2017. (Anonymous/Wikimedia Commons)

As companies like Uber, Google, and Apple join traditional automakers in the race to produce self-driving cars, a future dominated by autonomous vehicles (AVs) is becoming a reality. From improved efficiency to reduced waste, the potential benefits of AVs are enormous, and they could play a central role in the future ecosystems of the sharing economy and smart cities.

However, despite the excitement surrounding the development of driverless cars, apprehensions remain about what they will mean for passengers and others on the road. One of the most commonly raised concerns is that system malfunctions could cause passenger and pedestrian deaths. In fact, last year Uber made headlines when one of its driverless cars killed a woman in Arizona, prompting the company to pause its AV testing. Although this was an isolated technical failure that will be fixed in time, it raises a more pressing question: how should AVs make decisions in unavoidable situations involving real lives? This question leads to what is known as the moral dilemma.

What is a moral dilemma?

At its most basic level, a moral dilemma involves “conflicts between moral requirements.” In relation to autonomous vehicles, these dilemmas arise in situations in which any course of action would cause damage or loss of life to at least one party. Take, for example, a vehicle whose brakes have failed. It must either turn left and hit a motorcyclist, turn right and hit a van carrying a family, or continue straight into a truck and kill its own passenger.

This unfortunate decision highlights the conflicts of morals that people may face as more AVs take to the streets. Such circumstances have also raised questions about insurance and who is liable for accidents. Today, there is a relatively clear definition of when a driver is at fault. However, as AVs become more autonomous, liability is shifting toward the manufacturer and becoming more ambiguous. In a traditional car accident, the driver reacts out of pure reflex and is not held completely responsible for his or her actions. Because AVs incorporate artificial intelligence, however, the engineers who write the car’s code can decide in advance how it will react in such situations. As a result, we are faced with the question of how to determine the best course of action: when choosing which life to save, how should demographics, role in society, and other important variables be weighed?

Tackling an Unsolvable Problem

In an effort to determine a universal moral code that could be applied to such decisions, MIT researchers conducted an online survey, known as the Moral Machine, in which they gathered a variety of perspectives by presenting participants with hypothetical accident scenarios and asking how they would respond in each one. After surveying over 2.3 million people worldwide, the researchers found that only two principles were close to universal: save multiple people over an individual, and save humans over animals. The lack of consensus on other preferences, including age and fault, reveals the flaw in relying on morals alone to resolve the AV dilemma.

Additional issues arise around incentives and behavior. For example, if cars are programmed to minimize damage by hitting a motorcyclist who is wearing a helmet rather than one who is not, would that not punish riders for wearing helmets and discourage their use? Furthermore, according to a study published in Science, while many people say they want AVs to save pedestrians at the expense of the passenger, most admit they would not buy a car that does just that. Automakers are thus placed in the difficult position of deciding exactly how to program their cars so as to win consumers, satisfy the public, and avoid legal trouble.

Forging an Imperfect Path Forward

One solution may be to base norms broadly on the ideology of utilitarianism, defined as doing the most good for the greatest number of people, which leaves less room for debate and reduces the liability and responsibility of manufacturers. There will still need to be a degree of flexibility, however, which could be implemented by introducing an element of randomness into the code for unforeseeable situations and by allowing case-by-case review where liability is unclear. Such a legal framework would guide private companies as they manufacture AVs, allowing them to continually improve their code. A minimal sketch of how such a rule might look appears below.
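To make the idea concrete, here is a minimal, purely illustrative Python sketch of a utilitarian decision rule with a random tie-breaker. The maneuver names and harm scores are hypothetical and are not drawn from any manufacturer’s actual system.

import random

# Purely illustrative: the harm scores and maneuvers below are hypothetical,
# not any manufacturer's actual logic.
def choose_maneuver(options):
    """Pick the maneuver with the lowest estimated harm; break ties at random."""
    min_harm = min(options.values())
    # Every maneuver whose estimated harm equals the minimum is a candidate.
    candidates = [m for m, harm in options.items() if harm == min_harm]
    # When the utilitarian calculus cannot distinguish between outcomes,
    # choose randomly rather than encoding a preference between lives.
    return random.choice(candidates)

# Hypothetical version of the brake-failure scenario described earlier.
scenario = {
    "turn_left": 1.0,    # hit the motorcyclist
    "turn_right": 3.0,   # hit the van carrying a family
    "go_straight": 1.0,  # hit the truck, endangering the AV's own passenger
}
print(choose_maneuver(scenario))  # prints "turn_left" or "go_straight"

The point of the sketch is simply that a rule of this kind is explicit and auditable: regulators and manufacturers can inspect the harm estimates and the tie-breaking behavior rather than litigating individual moral judgments after the fact.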

As AVs become more widespread, it is important that legal rules are codified, at least as soft law, and adopted universally. So far, the few cases that have arisen have been resolved on a case-by-case basis, usually ending in settlements before reaching court. Germany, however, has taken the lead in establishing formal guidelines: its Ethics Commission on Automated Driving has created an initial framework governing the programming of self-driving vehicles. Key ideas include making the protection of human life the top priority, prohibiting distinctions based on personal features (age, gender, etc.) in unavoidable accident situations, requiring transparency about whether the human or the computer is responsible for the driving task, documenting who is driving at any given time, and giving drivers data sovereignty over the use of vehicle data. This is a promising first step, and other countries should follow Germany’s lead and come together to establish common rules.

Ultimately, AVs will decrease the number of accident-related injuries and deaths, and we must find a way to resolve the “moral dilemma” so that fear of rare, hypothetical situations does not block the adoption of this technology. Codifying and relying on standards and norms, rather than on individual morals or values, can unify people, keep car manufacturers accountable, and keep them in agreement as we work together to adopt this new technology.


Leia Wang

Leia Wang is a senior in the World Bachelor in Business, a program in which she will earn three degrees from the University of Southern California, the Hong Kong University of Science and Technology, and the Università Commerciale Luigi Bocconi. Over the past few years, she has consulted for CIEDS, the fifth-largest NGO in Rio de Janeiro, studied at the Institute for European Studies in Brussels, interned for the U.S. State Department Foreign Service, and worked for Morgan Stanley in London. She also served as a delegate to the 2018 UN University Leadership Symposium and has volunteered in and visited over fifteen countries on six continents. Her main topics of interest are human rights, technology’s impact on society, and economic development in Western Europe and China.