UNIVERSITY of NOTRE DAME


Artificial Intelligence Biases
Emile Loza de Siles
Abstract
Artificial intelligence (AI) bias, often called “algorithmic bias,” is a central focus of AI law and policy debates worldwide. Those discussions, however, have erroneously persisted in conceptualizing AI bias as an oversimplified, monolithic phenomenon. That treatment suggests that AI bias admits of a relatively facile governance and regulatory approach. One and done, seems to be the idea.
AI biases, however, are many, complex, and often interoperating as they present throughout the AI lifecycle. A more scientific, process-oriented, problem-solving approach to AI biases is needed to produce fact-based and actionable understandings with which to craft appropriate and effective AI governance and regulatory regimes.
This Article adopts systems and process engineering as its guiding rigor and disaggregates the AI biases problem away from simplistic views of AI bias and toward the actionable discernment of individual AI biases. It profiles the AI biases problem space and its complexities. Drawing upon learnings from cognitive engineering and the ethical technology movement, the Article conceives of AI as a human-machine enterprise with human accountability at its core. It then maps out the lifecycle for that joint enterprise as an organizing framework for AI biases governance and control.
This Article then presents the first comprehensive compendium of fifty AI biases synthesized from the literatures of machine learning, AI, computer science, behavioral economics, statistics, epidemiology, psychology, law, and other disciplines; and it translates and interprets those informed understandings of AI biases and brings them into the legal literature. The law and the corresponding policy debates have little experience with or understanding of these AI biases and, as to the great majority of them, none at all. The work of this Article is therefore all the more pressing, as it offers the first rendering of these AI biases as actionable subjects for the creation and implementation of AI governance, public and private, and for the application and development of AI policy and law and of more factually grounded legal theory.
Following its systems and process engineering approach, the Article organizes the compendium into a first-ever taxonomy of six AI bias categories based upon the domains within the AI lifecycle that those biases do or may impact and, accordingly, the domains that AI governance efforts should address. The Article identifies the AI biases within each category with definitions, descriptions, and AI use case exemplars. To begin to explain the interoperation of AI biases, the Article follows with a discussion of bias injection and other AI bias mechanisms.
With these contributions toward a thoroughly interdisciplinary and process-contextualized understanding of AI biases, this Article enables better informed and grounded AI policy debates, AI governance efforts, and developments of legal theory and law.

Notre Dame Journal on Emerging Technologies ©2019