UNIVERSITY of NOTRE DAME

Ethical AI in American Policing

Elizabeth E. Joh

Introduction

We know there are problems in the use of artificial intelligence in policing, but we don’t quite know what to do about them. Artificial intelligence (AI) systems are becoming conventional and widespread in routine policing. License plate reader systems routinely scan thousands of plates per minute. At least 117 million Americans are included in databases where facial recognition searches are conducted. Predictive algorithms try to forecast future places or persons warranting law enforcement attention. Autonomous drones can follow a suspect or record activity with the push of a button. Increasingly the issue is not whether, but under what circumstances, these tools will be used.

With artificial intelligence, the police can perform their traditional functions not just on a faster and larger scale, but in novel ways that have prompted strong criticism. Some of these issues are familiar to a legal audience. If the police can track everywhere you’ve been in public, what does that mean for the usual lack of constitutional protections in public spaces? If the police can easily identify every face in a public protest, how does that dampen free speech rights? Other voices in this backlash have arisen out of what has been called the algorithmic accountability movement: scholars and activists who have focused on the harms posed by the particulars of the technologies themselves. For instance, the now quite well-documented issue of racial and gender bias in many facial recognition technology programs means that the costs of mistaken matches are borne disproportionately by people of color and women. At the same time, law enforcement officials have embraced these technologies as promising innovations. Automation both in and around policing is growing, with few signs of slowing down.
One can also find many reports and white papers today offering principles for the responsible use of AI systems by governments, civil society organizations, and the private sector. Increasingly common too are calls for the fair use of artificial intelligence across fields like housing, employment, consumer credit, and criminal justice. This comes at a time when automated decision-making might determine whether you’ll be hired, whether you’ll be fired, whether you’ll receive one medical treatment over another, or whether you’ll be granted bail. In 2021, Congress established a National AI Advisory Committee, tasked with providing recommendations about the use of AI and its impact on society. The White House Office of Science and Technology Policy plans to publish an Algorithmic “Bill of Rights.” The European Union is preparing to adopt a comprehensive regulatory framework for the use of AI in 2022. 

Yet, largely missing from the current debate in the United States is a shared framework for thinking about the ethical and responsible use of AI that is specific to policing. Leading an average-sized law enforcement agency in the United States in the 2020s means responding to very different pressures: to reduce crime, to address bias and discrimination, to cut costs, and to innovate. In this context, AI systems offer tools that promise faster and more efficient methods of investigation and police administration. But their adoption into police decision-making and tactics also introduces complications. Any police department interested in guidelines for ethical use of AI systems would “find a field with few existing examples and no established guidelines or best practices.”

Commitments to ethical and responsible principles in the police use of AI have a role here. They aren’t substitutes for regulation or judicial decision-making. However, legislators and judges have been slow to act. The United States lacks a national, comprehensive approach to the regulation of AI systems. Instead, state and local governments have been left to decide whether and how to regulate AI systems either based on a particular industry or on specific use cases. Similarly, there have been a small number of cases challenging the use of AI systems in the courts, but not enough to conclude that a body of rules has been developed. This means that policing in particular is guided by an uncertain set of rules and legal decisions for the adoption and use of AI-based systems. And while ethical and legal principles share common concerns, ethical principles broaden the set of possible questions police departments should consider.

Many AI policy guidance documents exist now, but their value to the police is limited. Documents that simply repeat broad principles about the responsible use of AI systems are less helpful than ones that 1) take into account the specific context of policing, and 2) consider the American experience of policing in particular. There is an emerging consensus about what ethical and responsible values should be part of AI systems. This essay considers what kind of ethical considerations can guide the use of AI systems by American police.

References

Professor of Law, U.C. Davis School of Law.  Thanks to the editorial staff of the Notre Dame Journal on Emerging Technologies for their editorial work, and to the inter-journal collaboration at Notre Dame Law School for organizing the Race & the Law: Interdisciplinary Perspectives symposium.  

Notre Dame Journal on Emerging Technologies ©2020  
