An Education Theory of Fault For Autonomous Systems
Article by William D. Smart, Cindy M. Grimm & Woodrow Hartzog
Automated systems like self-driving cars and “smart” thermostats are a challenge for fault-based legal regimes like negligence because they have the potential to behave in unpredictable ways. How can people who build and deploy complex automated systems be said to be at fault when they could not have reasonably anticipated the behavior (and thus risk) of their tools?
Part of the problem is that the legal system has yet to settle on the language for identifying culpable behavior in the design and deployment of automated systems. In this article, we offer an education theory of fault for autonomous systems: a new way to think about fault for all the relevant stakeholders who create and deploy “smart” technologies. We argue that the most important failures that lead autonomous systems to cause unpredictable harm stem from a lack of communication, clarity, and education among the procurers, developers, and users of these technologies.
In other words, while it is hard to exert meaningful control over automated systems to get them to act predictably, developers and procurers have great control over how much they test these tools and articulate their limits to all the other relevant parties. This makes testing and education one of the most legally relevant points of failure when automated systems harm people. If the relevant parties recognize a responsibility to test and educate each other, foreseeable errors can be reduced, more accurate expectations can be set, and autonomous systems can be made more predictable and safer.
Introduction
In the American tort law system, the concept of “fault” is one principle for shifting the risk of loss in any given context.1 In other words, parties that are “at fault” due to their culpable behavior should be held liable for the harms they caused, rather than requiring the victim of the harm to bear the loss. The concept of fault in legal regimes is often premised on failing to reasonably protect against foreseeable harms.2 Judges and lawmakers ask what control (allegedly) culpable actors have over the system and what risks are associated with the exercise of, or failure to exercise, that control. Complex autonomous systems like those in self-driving cars, “smart” thermostats, and autonomous warehouses present a difficult problem for fault-based legal regimes like tort law for one simple reason: they have the potential to behave in unpredictable ways.3
How can people who build and deploy automated and intelligent systems be said to be at fault when they could not have reasonably anticipated the behavior (and thus risk) of an automated and intelligent system? Consider a very real case: the Tesla Autopilot crash on May 7, 2016, that killed Joshua Brown was partly the user’s fault (he was watching a movie) but also the fault of a sensor system that mistook a truck for a cloud.4 Perhaps the software and hardware designers should have included this test case, but how many more cases like this are there, and how would one enumerate them? Is it fair to blame the creators alone when automated systems act in unpredictable ways?
This dilemma has tied lawmakers, judges, and academics in knots. Some scholars have suggested that the difficulty of assessing fault in the design of automated, intelligent, and machine-learning systems means that a strict liability regime, which holds producers liable for harm regardless of fault, is preferable to an arbitrary or attenuated finding of fault.5 We agree with many of the valid reasons why strict liability for harms caused by autonomous systems might be the best approach. However, fault-based approaches might be more attractive to courts, lawmakers, and industry if the right framework and language were used to establish a basis of fault in the creation, deployment, and use of autonomous systems.
We think part of the problem with our discussion of fault is that we have yet to settle on the best approach and language for specifically targeting culpable behavior in the design and deployment of automated systems. The purpose of this paper is to offer an additional structured and nuanced way of thinking about the duties and culpable behavior of all the relevant stakeholders in the creation and deployment of autonomous systems. We argue that some of the most articulable failures in the creation and deployment of unpredictable systems lie in the lack of communication, clarity, and education among the procurers, developers, and users of automated systems. In other words, while it is hard to exert meaningful “control” over automated systems to get them to act predictably, developers and procurers have great control over how much they test an automated technology and articulate its limits to all the other relevant parties. This makes education through testing and teaching one of the most legally relevant points of failure when automated systems harm people.
As part of our proposed framework for identifying culpable behavior, we identify four specific and foreseeable education-failure points in the creation, deployment, and use of automated systems that contribute to harm caused by the unpredictability of autonomous systems. These failures are Syntactic (failure of sensors to identify objects and actions, e.g., perceiving the truck as a cloud), Semantic (failure to correctly translate human intent into algorithms, e.g., “detect all objects you might run into”), Testing (failure to test the system in expected scenarios, e.g., testing back-lit white objects), and Warning (failure to clearly articulate the limitations of the system, e.g., to tell the user that the sensors are not sufficient to operate with the sun in certain positions or that an automated car cannot be predictably operated off-road).
To clarify, we are not arguing that our theory of culpability is preferable in all contexts. We reiterate our belief that arguments to hold manufacturers of automated systems strictly liable for harm they cause in certain contexts are compelling. Our goal for this paper is much more modest. We simply aim to introduce another possible way to think about the justifications for the law to shift liability for harms related to the creation and use of automated systems. In doing so, we aim to add to the developing body of literature in this field that includes theories of strict liability, res ipsa loquitur, common carrier liability, and proximity-driven liability, among others.6
This article proceeds in three parts. In Part I, we highlight the control and predictability gap in autonomous systems. This part provides a brief background on the process of designing and implementing autonomous systems and explains the limits of how predictable (and controllable) these systems can be in practice. It also shows how the limits of predictability pose problems for traditional fault-based legal regimes like tort law.
In Part II, we introduce our theory of fault as a failure to properly test and educate other stakeholders on the goals, limitations, and foreseeable errors of an automated system. Specifically, we articulate four education-failure points where a lack of clarification among the stakeholders and a failure to identify foreseeable errors result in automated systems (potentially) acting in harmful and unpredictable ways. These four education-failure points also give the stakeholders a pathway to offer or demand evidence of due diligence at a more fine-grained level by explicitly identifying the actions they have taken, or should have taken, to reduce potential harm.
The relevant stakeholders in the development and use of any automated system are end-users (the people using the technology), procurers or merchants (the people putting together application-specific systems for the end-users), and developers (the people developing the low-level technology used in the systems).
There are at least four different kinds of educational failures that create problems in autonomous systems: Syntactic, Semantic, Testing, and Warning. These are not independent because education is a two-way street: For example, Semantic failures have as their counterpart Warning failures.
1) Syntactic failures occur when developers fail to communicate to procurers the many different ways in which robotic systems might fail to identify real-world objects. This error occurs because of a mismatch between the precision of artificial sensors and the robustness of human senses. Developers must identify such possible failures and communicate them to procurers to provide a full picture of the many different implementation problems.
2) Semantic failures occur when the human-articulated goals and intentions for autonomous systems are not translated correctly into software. Semantic failures can occur both between developers and procurers and between procurers and end-users (when procurers or end-users incorrectly express their requirements to the developer or procurer, respectively). As with Syntactic failures, it is largely the responsibility of the developer or procurer to educate the procurer or end-user, respectively, on how that party’s human-language statements have been translated into algorithms, and to clarify vocabulary usage.
3) Testing failures occur when a necessary syntactic or semantic test is simply missing from the test set. This is largely the developer’s fault, but could also be the fault of the procurer for not fully articulating all of the desired use cases. Testing failures can also occur when the necessary syntactic or semantic tests are not conducted appropriately or are otherwise invalid.
4) Warning failures occur when users are not appropriately made aware of avoidable problems caused by the unpredictability of these systems (by the developer to the procurer, or by the procurer to the end-user). With respect to end-users, these failures are widely recognized in the law of product safety as failures to warn, to be balanced against design defects. Warning failures are, in many ways, the inverse of Semantic failures: Warning failures flow from developer to procurer to end-user, while Semantic failures flow in the opposite direction, from end-user to procurer to developer.
Part III considers the impact of conceptualizing education failures as culpable behavior. By articulating and isolating these educational failures, courts and lawmakers could better attribute responsibility among multiple actors working together to create complex and unpredictable autonomous systems. Such a framework could practically and legally allocate responsibility for testing, translation, and communication. It could also encourage more innovation, iteration, and realistic design and implementation that better sets, and syncs with, user expectations about how robotic systems will operate.
Automated systems will probably always find new ways to surprise us. But if the relevant parties recognize a responsibility to test and educate each other, foreseeable errors can be reduced, more accurate expectations can be set, and autonomous robots can be made more predictable and safer. We cannot “control” automated technologies in the traditional sense. But we can control how much we learn and teach each other about how these systems might work.
1. See Robert E. Keeton, Conditional Fault in the Law of Torts, 72 Harv. L. Rev. 401, 401–02 (1959) (“[C]ourts should leave a loss where they find it unless good reason for shifting it appears. . . . In modern Anglo-American tort law, fault has been considered the one generally acceptable reason for such loss shifting. For more than a century, at least, fault has been the principal theme of tort law.”); Brown v. Kendall, 60 Mass. 292, 298 (1850) (holding that defendants are liable for certain harms in tort only if they intended to cause the harm or if they are at fault in causing them).
2. A related concept is that of “duty” towards others. Restatement (Third) of Torts: Physical & Emotional Harm § 7 (Am. L. Inst. 2010) (“An actor ordinarily has a duty to exercise reasonable care when the actor’s conduct creates a risk of physical harm.”). Those who breach their duty to others, in the language of the tort of negligence, are said to be at fault. David G. Owen, Philosophical Foundations of Fault in Tort Law, in Philosophical Foundations of Tort Law 201 (David G. Owen ed., 1995); Restatement (Third) of Torts: Physical & Emotional Harm § 3 (2010); see also O.W. Holmes, The Common Law 77, 96 (1881).
3. See Harry Surden & Mary-Anne Williams, Technological Opacity, Predictability, and Self-Driving Cars, 38 Cardozo L. Rev. 121, 125 (2016).
4. Neal E. Boudette, Tesla’s Self-Driving System Cleared in Deadly Crash, N.Y. Times (Jan. 19, 2017), https://www.nytimes.com/2017/01/19/business/tesla-model-s-autopilot-fatal-crash.html?_r=0.
5. David Vladeck has made a compelling case for strict liability for parties that cause harm via automated vehicles, writing: “There are four strong policy reasons to establish a strict liability regime for this category of cases. First, providing redress for persons injured through no fault of their own is an important value in its own right. . . . Second, a strict liability regime is warranted because, in contrast to the injured party, the vehicle’s creators are in a position to either absorb the costs, or through pricing decisions, to spread the burden of loss widely. . . . Third, a strict liability regime will spare all concerned the enormous transaction costs that would be expended if parties had to litigate liability issues involving driver-less cars where fault cannot be established. . . . And fourth, a predictable liability regime may better spur innovation than a less predictable system that depends on a quixotic search for, and then assignment of, fault.” David C. Vladeck, Machines Without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117, 146–47 (2014).
6. See Bryant Walker Smith, Proximity-Driven Liability, 102 Geo. L.J. 1777, 1779 (2014) (“This Article argues that growing proximity could significantly expand sellers’ point-of-sale and post-sale obligations toward people endangered by their products.”). See also Andrew Selbst, Negligence and AI’s Human Users, 100 B.U. L. Rev. 1315 (2020); Rebecca Crootof, The Internet of Torts: Expanding Civil Liability Standards to Address Corporate Remote Interference, 69 Duke L.J. 583 (2019); Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 555 (2015) [hereinafter Calo, Lessons of Cyberlaw] (“There will be situations, particularly as emergent systems interact with one another, wherein otherwise useful technology will legitimately surprise all involved. Should these systems prove deeply useful to society, as many envision, some other formulation than foreseeability may be necessary to assess liability.”); M. Ryan Calo, Open Robotics, 70 Md. L. Rev. 571, 582–83 (2011) [hereinafter Calo, Open Robotics]; Vladeck, supra note 5, at 128 (“Tort law is ordinarily unwilling to let people injured through no fault of their own bear costs imposed by others. So the question then becomes, ‘Who pays?’ The only feasible approach, it would seem, would be to infer a defect of some kind on the theory that the accident itself is proof of defect, even if there is compelling evidence that cuts against a defect theory. There is precedent for courts making such an inference, which is simply a restatement of res ipsa loquitor.”); Jack Boeglin, The Costs of Self-Driving Cars: Reconciling Freedom and Privacy with Tort Liability in Autonomous Vehicle Regulation, 17 Yale J.L. & Tech. 171, 175 (2015); Julie Goodrich, Driving Miss Daisy: An Autonomous Chauffeur System, 51 Hous. L. Rev. 265, 267 (2013).
- Privacy Law & Data Protection
Article by Philip M. Nichols
Notre Dame Journal on Emerging Technologies ©2020