UNIVERSITY of NOTRE DAME
Legal Liability for Artificial Intelligence and Potential Tort Reform
Kevin Lee
Introduction
In 2022, the release of ChatGPT, a conversational AI system, set off a race in artificial intelligence (AI) among technology companies. Since then, advances in AI technologies have dominated newspaper headlines, and fierce competition has made AI tools increasingly sophisticated. Instead of manually pulling up a forecast from a website, one can simply ask Siri or Alexa for the latest weather. Many scholars have likewise embraced autonomous driving technologies, claiming they will make our roads “accident-free.” AI has also reshaped business entities’ decision-making processes: Shearman & Sterling LLP, for example, uses Kira, a document review software, to conduct mergers and acquisitions due diligence.
While AI has substantial potential to improve people’s lives, it also poses significant risks. As with any new technology, once AI has been adopted ubiquitously, there will be injuries, and some will result in lawsuits. A new question arises alongside this fast-developing industry: Who is liable for AI’s harms? The unique characteristics of AI pose significant challenges to our current tort liability frameworks. Under the traditional view, the most fundamental basis of tort liability is negligence. The law requires one to behave prudently, in a manner that conforms to the conduct of an ordinary, reasonable member of the community. If one deviates from this standard of care and injures somebody else, liability may well result. The second tort liability framework is product liability: a manufacturer is liable if the product it created was designed defectively, manufactured defectively, or sold without adequate warning of its potential dangers. Moreover, a legislature could subject AI to a strict liability or vicarious liability model. Under a strict liability regime, the manufacturer or operator could be liable for damage associated with their acts regardless of whether they were at fault. Under a vicarious liability model, if AI is deemed an agent subject to a principal’s control, the principal, rather than AI’s creator, would be liable for injuries caused by the AI.

Whether negligence, product liability, strict liability, or vicarious liability is the appropriate legal framework for compensating injuries is, therefore, a reflection of what we envision AI to be. If AI is deemed a mere tool, a “sophisticated calculator,” then a negligence or product liability analysis fits. If AI is deemed to act with something approaching independent agency or self-consciousness, then vicarious liability or strict liability may be the better framework.
This distinction about the nature of AI is crucial because it determines both whether injured individuals can receive remedies and who pays. This Note therefore takes up the question of which, if any, of the current liability frameworks can successfully adapt to AI technologies.
Notre Dame Journal on Emerging Technologies ©2019