
Addressing the Trolley Problem in the Twenty-First Century

The Trolley Problem is a well-known ethical decision-making thought experiment first developed by the Oxford philosopher Philippa Foot in 1967. 1 In the scenario, a runaway trolley is headed toward five unsuspecting people standing on the track. 2 On an alternate track stands a single pedestrian. 3 A switch operator must decide whether to pull a switch that would divert the trolley onto that alternate track, killing one person instead of five. 4 The purpose of the dilemma is to compel the respondent to consider whether it is ethical to take an action that saves several lives when that action results in the death of someone else.

More than fifty years after its conception, the Trolley Problem has experienced a renaissance with the advent of autonomous vehicles ("AVs"). In the new iteration, instead of a person pulling a switch, the decision maker is a car controlled by artificial intelligence. That AV is on a collision course with a group of people, and the software must decide whether to alter course when the only available path would result in an accident involving someone else. 5 The facts of the situation are often changed to further complicate the decision. What if the one person is a little girl? What if the group of people is robbing a bank? These are not just interesting questions for philosophers to debate. How engineers choose to solve these problems has real-world implications because it shapes how they program the ethical decision-making software of AVs.

Some people may dismiss the Trolley Problem as too implausible to have real-world implications. An autonomous vehicle could stop, swerve in a different direction, or drive cautiously enough to avoid such predicaments; the trolley in the original problem was, after all, out of control. However, consider a world in which, instead of a single autonomous vehicle facing the trolley dilemma, there are millions of autonomous vehicles on the road every day encountering similar situations. Even if these vehicles are not making a binary choice about who lives or dies, the decisions they make will shift the risk of collision between themselves and the vehicles and pedestrians around them. When thousands of these decisions are made every day, accidents will happen and people will be injured through no fault of their own.

Imagine a scenario in which an AV is driving down the road and must pass between a cyclist on its right and a large truck on its left. Just like a human operator, the machine must decide how to pass between the two vehicles. If the AV elects to pass closer to the bicycle, the cyclist assumes a greater risk of being injured in a collision with the AV. If the AV moves closer to the truck, the AV assumes a greater risk of being damaged or of exposing its passengers to injury in a collision with the truck. 6 It is possible that the AV navigates this situation without any party being injured. It is also possible, however, that when it shifts toward the cyclist, the cyclist has less room to maneuver and collides with another obstacle. The damage would be negligible if this situation occurred infrequently; but with thousands of AVs on the road interacting with thousands of cyclists, the encounters will be more numerous and the injuries will add up.
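
To see how this passing maneuver reduces to an explicit risk tradeoff, consider the rough sketch below. It is purely illustrative: the risk function, the lane width, and the weights are invented for this essay, not drawn from any actual AV software. The point is only that some weighting of the cyclist's risk against the occupants' risk must be chosen, and that choice determines where the vehicle drives.

```python
# Illustrative sketch only: a hypothetical lateral-offset chooser for the
# cyclist/truck passing scenario described above. The risk model and
# weights are invented for illustration, not drawn from any real AV stack.

def collision_risk(clearance_m: float) -> float:
    """Toy model: risk decays as clearance grows (assumed, not measured)."""
    return 1.0 / (1.0 + clearance_m ** 2)

def choose_lateral_offset(lane_width_m: float, cyclist_weight: float,
                          self_weight: float, steps: int = 50) -> float:
    """Pick the clearance given to the cyclist (0 = hug cyclist,
    lane_width_m = hug truck) that minimizes a weighted sum of the
    risk borne by the cyclist and the risk borne by the AV."""
    best_offset, best_cost = 0.0, float("inf")
    for i in range(steps + 1):
        offset = lane_width_m * i / steps
        cost = (cyclist_weight * collision_risk(offset)                 # risk to cyclist
                + self_weight * collision_risk(lane_width_m - offset))  # risk to AV/occupants
        if cost < best_cost:
            best_offset, best_cost = offset, cost
    return best_offset

# Weighting its own safety heavily pushes the AV toward the cyclist;
# weighting the cyclist heavily pushes it toward the truck.
print(choose_lateral_offset(2.0, cyclist_weight=1.0, self_weight=5.0))  # 0.0 -> hugs the cyclist
print(choose_lateral_offset(2.0, cyclist_weight=5.0, self_weight=1.0))  # 2.0 -> hugs the truck
```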

Humans, in the previous scenario, might make different decisions based upon their own moral foundations and perceptions of the situation. A person who places more value on personal safety is likely to drive farther from the truck and closer to the cyclist. A driver who values the safety of others is likely to move closer to the truck. Because moral foundations vary from person to person, the resulting injuries will be split, at least roughly, between cyclists and vehicle operators. Assume, however, that all AVs are programmed to place their own safety above all other principles. Every AV would then drive closer to the cyclist and shift the risk of the scenario onto him or her. This could produce a world in which cyclists are statistically more likely to be killed than owners of autonomous vehicles, perhaps even five times more likely. On a much larger scale, we thus find ourselves back within the constraints of the Trolley Problem. And this time, the switch operator has decided to let the trolley kill the five pedestrians.
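
The aggregation argument can be made concrete with back-of-the-envelope numbers. In the sketch below, the encounter count and per-encounter injury probabilities are hypothetical; they only illustrate how a mixed population of human drivers splits injuries between cyclists and vehicle occupants, while a uniform occupant-first fleet concentrates them on cyclists.

```python
# Back-of-the-envelope sketch of the aggregation argument above. The numbers
# are hypothetical: per-encounter injury probabilities are invented to show
# how a uniform "protect the occupant" policy concentrates harm on cyclists.

ENCOUNTERS = 1_000_000  # hypothetical yearly cyclist-passing events

# Mixed human drivers: roughly half pass close to the cyclist, half close to the truck.
human_injuries_cyclist = ENCOUNTERS * (0.5 * 1e-5)   # cyclist at risk in half the passes
human_injuries_occupant = ENCOUNTERS * (0.5 * 1e-5)  # occupant at risk in the other half

# Uniform AV fleet that always protects its occupant: every pass hugs the cyclist.
av_injuries_cyclist = ENCOUNTERS * 1e-5
av_injuries_occupant = 0.0

print(human_injuries_cyclist, human_injuries_occupant)  # 5.0 5.0 -> split roughly evenly
print(av_injuries_cyclist, av_injuries_occupant)        # 10.0 0.0 -> all borne by cyclists
```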

Even setting aside actual physical injury, the mere shift of risk from the AV to those around it could be considered, by itself, an “injury-in-fact.” Tort law arose out of the principle that each person has a right to bodily integrity and is entitled to compensation when another person breaches a societal norm and violates that integrity. 7 More recently, courts have recognized a right to emotional integrity as well, granting compensation to victims who found themselves within the “zone of danger” of a negligent act. 8 Such a shift suggests that people have not only a right not to be physically injured, but also a right not to be placed in a position where they fear bodily injury. Under this theory, an AV decision-making framework that places greater risk of collision upon a specific subset of society, e.g., cyclists, injures the members of that subset by causing them to fear for their bodily integrity whenever they encounter autonomous vehicles on the road. If cyclists knew that AVs were programmed to drive closer to them, they would have good reason to fear the AVs driving past them.

What can engineers and computer scientists do to help mitigate the ethical quandaries posed by the Trolley Problem? The answer, according to Tufts cognitive and computer scientist Matthias Scheutz, is to develop autonomous machines that are explicit ethical agents: machines able to apply “explicit representations about ethical principles . . . in a variety of situations,” to “handle new situations not anticipated by their designers and make sensitive determinations about what should be done,” and to express their “reasoning in natural language.” 9 Explicit ethical agents can be contrasted with implicit ethical agents, which rely only upon “the interplay of existing algorithms in the architecture,” such as preprogrammed obstacle avoidance, to make decisions. 10
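
Scheutz’s distinction can be pictured with a minimal sketch. The class names, principles, and weights below are hypothetical stand-ins, not his implementation: the implicit agent’s “ethics” is buried in a fixed reaction, while the explicit agent consults represented principles and can articulate why it chose as it did.

```python
# A minimal sketch of the explicit/implicit distinction described above,
# with invented class names, principles, and weights.

class ImplicitAgent:
    def act(self, obstacle_ahead: bool) -> str:
        # Ethics lives only in the interplay of preprogrammed routines.
        return "brake" if obstacle_ahead else "continue"

class ExplicitAgent:
    def __init__(self):
        # Explicit, inspectable representations of principles (hypothetical).
        self.principles = {"avoid_harm_to_humans": 10, "obey_traffic_law": 5}

    def act(self, options: dict) -> tuple[str, str]:
        # options maps each candidate action to the principles it would violate.
        def cost(action):
            return sum(self.principles[p] for p in options[action])
        best = min(options, key=cost)
        reason = f"chose '{best}' because it violates the fewest weighted principles"
        return best, reason  # a natural-language-style justification

print(ImplicitAgent().act(obstacle_ahead=True))  # 'brake', no explanation available
agent = ExplicitAgent()
print(agent.act({"swerve": ["obey_traffic_law"], "continue": ["avoid_harm_to_humans"]}))
```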

The first approach Scheutz proposes for developing an explicit ethical agent is to program legal principles into the software. 11 Autonomous vehicles already apply this approach to an extent: they are programmed to drive the speed limit, stay on the right side of the road, and stop at stop signs. Going beyond traffic rules, Scheutz suggests programming machines with the law of intentional torts, which includes offenses such as false imprisonment, battery, assault, and the aforementioned infliction of emotional distress. 12 The law of torts is a logical place to begin when preparing machines for human interaction because it penalizes the “breach of a duty” that is imposed “on persons who stand in a particular relation to one another.” 13 The law of intentional torts, specifically, “is concerned with the duty to abstain from causing a willful or intentional injury to others.” 14 If we think that the law of intentional torts, as developed by courts and state legislatures, represents society’s ethical beliefs about how humans should interact with one another, then it is an appropriate starting point for codifying basic human ethical principles into an autonomous machine. If we believe, for example, that it is unethical for a human to strike another human, then an autonomous machine must be prevented from striking a human if we want that machine to model human ethics.
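
A simple way to picture this approach is a rule table screened against every candidate maneuver. The sketch below is a toy built on that assumption; the rule names and predicates are invented, and real traffic and tort doctrine is far richer than three lambdas.

```python
# Sketch of the "codify legal principles" approach discussed above, using a
# hypothetical rule table. The point is only that each candidate action is
# screened against explicit, inspectable legal rules before execution.

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    violated_by: callable  # predicate over a proposed action; True if violated

RULES = [
    Rule("speed_limit", lambda a: a["speed_mph"] > a["posted_limit_mph"]),
    Rule("battery",     lambda a: a["predicted_contact_with_person"]),
    Rule("assault",     lambda a: a["places_person_in_fear_of_contact"]),
]

def legal_violations(action: dict) -> list[str]:
    """Return the names of any rules the proposed action would violate."""
    return [r.name for r in RULES if r.violated_by(action)]

proposed = {"speed_mph": 28, "posted_limit_mph": 25,
            "predicted_contact_with_person": False,
            "places_person_in_fear_of_contact": True}
print(legal_violations(proposed))  # ['speed_limit', 'assault']
```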

There are several challenges, however, to this approach. First, it would require “making legal terms such as ‘intent’ or ‘imminent’ or ‘distress’ computational.” 15 Algorithms would be needed that could “detect intent” and perceive “imminence or distressed emotional states.” 16 It would also require “formalizing the legal concept of a ‘rational person,’” something on which not even legal scholars agree. 17
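
The sketch below illustrates the first challenge under one plausible assumption: that a legal term like “distress” would, in software, become the thresholded output of a perception model. The classifier and threshold here are placeholders; the point is that the contested legal judgment reappears as a numeric cutoff.

```python
# Sketch of the challenge above: a legal term such as "distress" becomes a
# thresholded score from a (hypothetical) perception model, and the threshold
# itself encodes the contested judgment about the "rational person."

def appears_in_distress(pedestrian_features: dict, threshold: float = 0.7) -> bool:
    # In practice this would be a learned classifier; here it is a stand-in
    # score showing that the legal concept collapses into a numeric cutoff.
    score = pedestrian_features.get("distress_score", 0.0)
    return score >= threshold

# Two engineers choosing different thresholds reach different "legal"
# conclusions about the same scene -- the scholars' disagreement, now in code.
scene = {"distress_score": 0.65}
print(appears_in_distress(scene, threshold=0.7))  # False
print(appears_in_distress(scene, threshold=0.6))  # True
```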

Second, as with the problems associated with developing a deontological framework for autonomous machines, it would require a significant amount of engineering work to explicitly codify all of the relevant legal principles, and it would take the artificial agent a long time to search through all of the rules before it could conclusively say that it is not violating one. 18 An AV may not have time to search a large database of rules when it is forced to make split-second decisions.
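
The runtime concern can be sketched as well. Assuming a naively implemented rule base that is scanned linearly on every decision, the cost grows with the number of codified rules; the figures below are illustrative, not benchmarks of any real system.

```python
# Sketch of the runtime concern above: with a naive linear scan, checking a
# proposed maneuver against N codified rules costs O(N) per decision, which
# may not fit in an AV's control-loop budget. Numbers are illustrative only.

import time

N_RULES = 200_000  # hypothetical size of a fully codified rule base
rules = [(f"rule_{i}", i % 7 == 0) for i in range(N_RULES)]  # (name, violated?)

start = time.perf_counter()
violations = [name for name, violated in rules if violated]
elapsed_ms = (time.perf_counter() - start) * 1000

# A control loop running at ~100 Hz leaves roughly 10 ms per cycle; a scan of
# this size can already consume much of that budget before perception and
# planning are even counted.
print(f"checked {N_RULES} rules in {elapsed_ms:.1f} ms, {len(violations)} violations")
```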

Third, this approach by itself does not account for the possibility of unethical laws. To understand the impact of unethical laws, we need look no further than the legacy of World War II and the Nuremberg Trials, in which Nazis were punished despite having acted under the laws and requirements of their government. Following the Nuremberg Trials, courts in the United States have acknowledged that members of a community might be required “to violate domestic law in order to prevent . . . crimes against humanity.” 19 An ethical framework for autonomous machines based strictly on legal principles would not permit such deviations.

The ultimate answer, most likely, is that there is no single software panacea that can ensure AVs safely navigate every road encounter. Engineers will need to combine a number of decision-making frameworks in order to appropriately model human ethical decision making. They may need to incorporate legal principles as well as aspects of Scheutz’s two other approaches: modeling “human moral competence” in the machine and implementing “one of the ethical theories proposed by philosophers (for example, virtue ethics, deontology, or consequentialism)” in the autonomous agent. 20 The lives of cyclists depend on it.
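
One plausible shape for such a combined framework, sketched under this essay’s assumptions rather than Scheutz’s own design, is an arbiter that scores each candidate maneuver with several evaluators (legal rules, a model of human moral judgment, a consequentialist theory) and picks the least objectionable option. Every function and weight below is a placeholder.

```python
# A minimal sketch of a hybrid framework: several evaluators score candidate
# maneuvers and an arbiter combines them. All internals are hypothetical.

def legal_score(action):             # penalize codified legal violations
    return -10.0 if action["violates_law"] else 0.0

def moral_competence_score(action):  # stand-in for a model of human moral judgment
    return -5.0 * action["expected_harm_to_others"]

def consequentialist_score(action):  # e.g., minimize total expected harm
    return -(action["expected_harm_to_others"] + action["expected_harm_to_occupants"])

def choose(actions, weights=(1.0, 1.0, 1.0)):
    def total(a):
        return (weights[0] * legal_score(a)
                + weights[1] * moral_competence_score(a)
                + weights[2] * consequentialist_score(a))
    return max(actions, key=total)

candidates = [
    {"name": "hug_cyclist", "violates_law": False,
     "expected_harm_to_others": 0.4, "expected_harm_to_occupants": 0.05},
    {"name": "hug_truck", "violates_law": False,
     "expected_harm_to_others": 0.05, "expected_harm_to_occupants": 0.2},
]
print(choose(candidates)["name"])  # 'hug_truck' under these invented numbers
```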

 

Notre Dame Journal on Emerging Technologies ©2020  
