Simon Chesterman, Artificial Intelligence and the Problem of Autonomy
1 Notre Dame J. Emerging Tech. 210 (2020)

Artificial Intelligence and the Problem of Autonomy

Article by Simon Chesterman

Artificial intelligence (AI) systems are routinely said to operate autonomously, exposing gaps in regulatory regimes that assume the centrality of human actors. Yet surprisingly little attention is given to precisely what is meant by “autonomy” and its relationship to those gaps. Driverless vehicles and autonomous weapon systems are the most widely studied examples, but related issues arise in algorithms that allocate resources or determine eligibility for programs in the private or public sector. This article develops a novel typology of autonomy that distinguishes three discrete regulatory challenges posed by AI systems: the practical difficulties of managing risk associated with new technologies, the morality of certain functions being undertaken by machines at all, and the legitimacy gap when public authorities delegate their powers to algorithms.

Introduction

On a moonless Sunday night in March 2018, Elaine Herzberg stepped off an ornamental median strip to cross Mill Avenue in Tempe, Arizona. It was just before 10:00 p.m., and the forty-nine-year-old homeless woman was pushing a bicycle laden with shopping bags. She had nearly made it to the other side of the four-lane road when an Uber test vehicle traveling at forty mph collided with her from the right. Ms. Herzberg, known to locals as “Ms. Elle,” was taken to hospital but died of her injuries, unwittingly finding a place in history as the first pedestrian death caused by a self-driving car.1

The Volvo XC90 that hit her was equipped with forward and side-facing cameras, radar and lidar (light detection and ranging), as well as navigation sensors and an integrated computing and data storage unit. A report by the U.S. National Transportation Safety Board (NTSB) concluded that the vehicle detected Ms. Herzberg, but that the software classified her as an unknown object, as a vehicle, and then as a bicycle with an uncertain future travel path. At 1.3 seconds before impact, the system determined that emergency braking was needed—but this had been disabled to reduce the potential for “erratic vehicle behavior.”2

It is still not entirely clear what went wrong on Mill Avenue that night. Uber removed its test vehicles from the four U.S. cities in which they had been operating, but eight months later they were back on the road—though now limited to 25 mph and no longer allowed to drive at night or in wet weather.3

A key feature of modern artificial intelligence (AI) is the ability to operate without human intervention.4 It is commonly said that such systems operate “autonomously.” As a preliminary matter, it is helpful to distinguish between automated and autonomous activities. Many vehicles have automated functions, such as cruise control, which regulates speed. These functions are supervised by the driver, who remains in active control of the vehicle. “Autonomous” in this context means that the vehicle itself is capable of making decisions without input from the driver—indeed, there may be no “driver” at all.

The vehicle that killed Elaine Herzberg was operating autonomously, but it was not empty. Sitting in the driver’s seat was Rafaela Vasquez, hired by Uber as a safety driver. The safety driver was expected to intervene and take action if necessary, though the system was not designed to alert her. Police later determined that Ms. Vasquez had most likely been watching a streaming video—an episode of the televised singing competition “The Voice,” it seems—for the twenty minutes prior to the crash. System data showed that she reached for the steering wheel just before impact and applied the brakes about a second later, after hitting the pedestrian.5 Once the car had stopped, it was Ms. Vasquez who called 911 for assistance.

Who should be held responsible for such an incident: Uber? The “driver”? The company that made the AI system controlling the vehicle? The car itself? No one?6 The idea that no one should be held to account for the death of a pedestrian strikes most observers as wrong, yet hesitation as to the relative fault of the other parties suggests the need for greater clarity as to how that responsibility should be determined. As systems operating with varying degrees of autonomy become more sophisticated and more prevalent, that need will become more acute.

Though the problem of autonomy is commonly treated as a single quality of AI systems, this article develops a typology of autonomy that highlights three discrete sets of regulatory challenges, epitomized by three spheres of activity in which those systems display degrees of autonomous behavior.7

The first and most prominent is autonomous vehicles, the subject of Part I.8 Certain forms of transportation have long operated without active human control in limited circumstances—autopilot on planes while cruising, for example, or driverless light rail. As the level of autonomy has increased, however, and as vehicles such as driverless cars and buses interact with other road users, it is necessary to consider how existing rules on liability for damage may need to be adapted, and whether criminal laws that presume the presence of a driver need to be reviewed. Various jurisdictions in the United States and elsewhere are already experimenting with regulatory reform intended to reap the anticipated safety and efficiency benefits without exposing road users to unnecessary risk or unallocated losses.

The second example, discussed in Part II, is autonomous weapons.9 Where driverless cars and buses raise questions of liability and punishment for harm caused, lethal autonomous weapon systems pose discrete moral questions about the delegation of intentional life-and-death decisions to non-human processes. Concerns about autonomy in this context focus not only on how to manage risk, but also on whether such delegation should be permissible in any circumstances.

A third set of autonomous practices is less visible but more pervasive: decision-making by algorithm.10 Many routine decisions benefit from the processing power of computers; in cases where similar facts should lead to similar treatment, an algorithm may yield fair and consistent results. Yet when decisions affect the rights and obligations of individuals, automated decision-making processes risk treating their human subjects purely as means rather than ends. Part III argues that this calls into question the legitimacy of those decisions when made by public authorities in particular.

Each of these topics has been the subject of book-length treatments.11 The aim here is not to attempt a complete study of their technical aspects, but to test the ability of existing regulatory structures to deal with autonomy more generally. Far from reflecting a single quality, these examples reveal discrete concerns about autonomous decision-making by AI systems: the practical challenges of managing risk associated with new technologies, the morality of certain decisions being made by machines at all, and the legitimacy gap when public authorities delegate their powers to algorithms.

Notre Dame Journal on Emerging Technologies ©2020  