
SIEGEL: Assigning human moral attitudes to self-driving cars

We should not release autonomous vehicles in full force until we have considered the ethical implications


In March of last year, a self-driving Uber car in Arizona struck and killed a pedestrian, Elaine Herzberg, in the first of several deadly accidents involving autonomous vehicles since the technology’s inception. Operating in coordination with three safety modules, self-driving cars are built to identify nearby objects while in motion and respond accordingly. In this instance, the car’s perception module failed, rendering it unable to identify the woman crossing the road with her bike — Herzberg died at the hands of a “confused” perception system, one known for its inability to “distinguish a plastic bag from a flying child.” While this incident ultimately stemmed from a system-design failure, the growing number of fatal accidents compels us to consider more deeply the ethical dilemmas inherent in the construction and operation of self-driving cars. To that end, we should not release autonomous vehicles in full force until we have considered the ethical implications.

The promise of an entirely self-operating vehicle is real — roads will be safer and free of inevitable human error, traffic congestion will clear and cars will be more efficient. According to a study by the Eno Center for Transportation, if 90 percent of the cars on the road were self-driving, the annual number of accidents would drop by 4.7 million, and accident-related deaths would fall by one-third each year. That is, if all goes according to plan.

In 2015, a trio of researchers, recognizing that some accidents are impossible to avoid when operating a motor vehicle with imperfect technology, attempted to unpack the challenges of programming an autonomous vehicle to confront moral dilemmas. They argue that, in order for self-driving cars to fulfill their promise of safer roads, they will have to expertly apply the methods of experimental ethics. In situations where harm is entirely unavoidable, how will an AV’s algorithm balance the scales of morality? Essentially, these cars will have to choose, just as humans would, between running over pedestrians and swerving into obstructions, sacrificing their own passengers. As AVs increase their presence on our roads, we have no choice but to address the moral questions they raise and work toward a cohesive societal moral attitude. According to the researchers, the ultimate challenge is “adopting moral algorithms that align with human moral attitudes” — no easy feat, to say the least.

When considering what values should guide the cars’ algorithms, we can look to leading philosophical theories about how we value human life and where we place ethical responsibility. The utilitarian moral doctrine calls for the least total harm for the greatest number of people. According to the study mentioned above, participants were comfortable with utilitarian AVs, “programmed to minimize an accident’s death toll.” However, the same participants were far less confident in their willingness to buy a utilitarian-programmed car for themselves. Would the adoption of such a model of automation, then, fly in the face of an individual’s self-interest in avoiding self-sacrifice in the event of unavoidable harm?

While respondents’ intuitions seem to be largely utilitarian with regard to driverless technology, we must also consider whether this departure from an egalitarian view of regulation holds up as an application of ethics across all cultures. When researchers from MIT posed to different cultural groups a series of moral dilemmas that could occur when a self-driving car fails, they found that variations in the answers correlated strongly with cultural differences — “Respondents from collectivistic cultures, which ‘emphasize the respect that is due to older members of the community,’ showed a weaker preference for sparing younger people” than respondents from individualistic cultures.

Given that the results of this study of our general moral preferences show a large degree of variation, even upon aggregation, we have to ask ourselves to what extent automakers and regulators should respect these preferences in the ethical features of self-driving cars.

The current research on AVs focuses primarily on the fact that this technology is much safer than human drivers and spares us from inevitable human error. While this may be the case, we are still not asking the question at the heart of the matter — how many accidents do human drivers avoid? We cannot answer this essential question because it is, in fact, unanswerable. Car manufacturers should not release AVs to freely cruise the streets until we have fully considered the ethical consequences — we would be remiss if we did not deliberate on the potential risks of assigning moral attitudes to robot cars.

Lucy Siegel is an Opinion Columnist and was an Opinion Editor for the 128th term of The Cavalier Daily. She can be reached at l.siegel@cavalierdaily.com.
