Chad Lawhorn | 03/23/2022

Humans in the Loop:

The Ethics of Force and Fully Autonomous Weapon Systems

Introduction

Should lethal weapons systems be allowed to decide when and where to use force and who to use it against, or should we mandate that humans remain in the use-of-force decision chain? This paper aims to review some of the moral and ethical arguments for and against fully autonomous weapon systems, to gain a deeper understanding of a select group of perspectives on the ethical use of such systems.

Autonomous weapons systems are not new, nor is the debate surrounding their use; however, we are nearing a time when these systems become something more. The discussion has expanded to include questions about whether these systems should exist, what they should be capable of doing, and whether they should be freed from human control. It has long been the case that even the most advanced autonomous weapons systems require some level of human interaction to use force, such as imposed engagement parameters and real-time remote control. However, it is likely an unavoidable reality that autonomous weapon systems will eventually include systems governed by artificial intelligence and entirely independent of human oversight.1 Fully autonomous weapon systems raise several moral and ethical questions; this paper focuses on the questions surrounding human oversight and the use of autonomous weapon systems. What makes human involvement in the use of force applied via autonomous systems more or less ethical than allowing autonomous weapon platforms to make these decisions on their own?2

If machines make use-of-force decisions, who will or can be held accountable when there are unintended consequences or intentional violations? If weapon systems are fully autonomous, how will they make decisions—based on preprogrammed parameters or on artificial intelligence-based machine learning? These are significant questions that require thorough discussion. Other concerns revolve around target distinction, human rights, and dignity.3 This paper acknowledges the potential utility of autonomous weapon systems while highlighting the uncertainty surrounding their future capabilities and the concerns regarding their ethical implications. One of the strongest arguments presented by those advocating for increased autonomy is the technology’s potential to reduce psychological and physical harm. This is a valid point, and this paper recognizes the obligation to do the least harm.4 Before any discussion can be held, it is essential to understand these systems, at least in general terms.

What Are Autonomous Weapon Systems?

Autonomous weapon systems are lethal systems that can operate independently of human interaction.5 More relevant to this paper is the future of such weapon systems, which could fully separate the machine from the human. In the future, weapons platforms could observe, identify targets, judge justification, and use force independently, without human oversight. This is a distinct possibility for the future of warfare—active research is being conducted to integrate those capabilities into autonomous weapon systems, which, for now, remain semi-autonomous.6 With their advent and early proliferation, the moral and ethical landscape surrounding their use can seem complicated or overwhelming; however, we should not abandon the debate.

Although autonomous weapon systems already exist, a significant number of moral and ethical arguments are geared toward the future of the technology. This is because autonomous weapons technology has seemingly unlimited room for evolution. As these technologies advance, they will likely gain greater autonomy.7 The potential of full autonomy concerns many activists and policymakers. It motivates them to focus on constructing an ethical and moral framework, in the hope that we will never live in a world where weapons are completely independent of human operators. Opponents of fully autonomous weapon systems focus on issues such as accountability, programming limitations, proportionality, targeting distinction, and human dignity. The uncertainty surrounding these topics informs their more prohibitive stance. Advocates for greater autonomy tend to emphasize harm reduction, utility, and the impact on future security.8 For the opposition, this paper focuses on two key argument categories: (1) accountability and responsibility, and (2) human dignity and dehumanization. For the proponents, it focuses on efficiency, accuracy, and harm reduction.

The Case Against Full Autonomy

Who is accountable for unintended consequences, or worse, intentional violations, if the weapon platforms themselves decide when to use force? Can the use of fully autonomous weapons be ethical if no one exists to assign blame to or hold to account? As we currently understand them, autonomous weapon systems are largely semi-autonomous. They are often remotely operated, or they receive preprogrammed use-of-force parameters and operate within a limited automated framework.9 If something goes wrong, we could blame several individuals, including commanding officers or operators, depending on the situation, because humans remain in the loop—there is a chain of responsibility. However, the question remains: who is accountable for errors made by systems with no human operator, weapon systems acting with their own agency?

It is no mystery that in warfare, crimes are committed and errors occur. Even when humans are the ones who commit crimes, and it is known who the responsible parties might be, it can be challenging to identify who should be held accountable. It can be even more challenging to bring them to justice once identified. Accountability can be difficult to achieve; several factors complicate the process, including double standards, politics, willingness, capacity, the scale of crimes, lack of evidence, and lack of jurisdictional authority due to sovereignty.10

It is reasonable to assume that the accountability process will be further complicated by the absence of a human in the loop. Some argue that when weapon systems become fully autonomous, they create a use-of-force environment in which no one is culpable, either because no one directly takes responsibility for violations or because it is simply unknown who should be held accountable when a human is not in the loop.11 Some believe this is an egregious violation of warfare's moral and ethical norms. In their view, one of the principal requirements for the use of force to be morally and ethically correct is that there must be responsible parties.12 For the Campaign to Stop Killer Robots, moral responsibility and accountability are critical for ensuring human dignity when force is used. To have accountability, humans must be the entity making the decisions around said use of force.13 According to those in opposition to fully autonomous weapon systems, these so-called killer robots are not capable of moral and ethical reasoning. Thus, if and when they violate the rules and norms of the use of force, it is not feasible to hold them accountable because they are not capable of being moral agents.14

However, some supporters of the continued evolution of these weapon systems attempt to identify humans who can be held to account as a means of countering the argument of a "responsibility gap." A commonly blamed party would be the programmers of the fully autonomous weapon systems. These weapons are supposed to behave in a specific way, even if they make their own choices. It is hard to imagine a military buying a weapon, autonomous or not, whose described behavior is, "It's a bit of a wild card, unpredictable at best." No, the military and its soldiers want a reliable weapon that functions in specific ways for its intended purpose. There could be no trust in the weapon otherwise. Thus, proponents reason that programmers could be held responsible for violations made by fully autonomous weapons because the weapon or weapon system did not function as intended. However, the opposition quickly addresses this point by pointing to the liability protections any company developing and manufacturing these systems would likely maintain, such as a simple disclaimer about programming limitations and the errors that could arise from those limitations; in other words, use at your own risk. Furthermore, if these systems are, in fact, fully autonomous, then they are likely governed by some artificial intelligence with the capacity to learn. If the machines are learning, how long will it take before their original programming is effectively altered as they begin to operate at higher levels? Those opposed to full autonomy argue that these facts make it difficult to hold programmers liable for the weapons' actions.15

What about the military personnel who ordered the use of such a system? Robert Sparrow asserts that fully autonomous weapon systems, which decide when and where to use force and whom to target, remove any such responsibility from military personnel, as those human actors did not control the targeting or decision-making of the weapon—the system operated as intended and with its own agency.16

The so-called “responsibility gap” or “accountability gap” is a complex issue that warrants further exploration. Not only do several moral and ethical questions arise, but there are also legal obstacles to meaningful accountability when there are no clear human actors in the decision-making chain involving the use of force.17 In addition to culpability for unintended consequences or intentional violations, there is the matter of human rights and dignity. What we seek accountability for is often the violation of humanity itself. 

Stop Killer Robots states that the development and advancement of lethal autonomous weapons create a world where the power of life-and-death decision-making moves from humans to robots. Targets are picked by nothing more than complex algorithms, and the process is devoid of human oversight. The campaign argues that this dehumanizes human targets by reducing them to data points.18 If machines are deciding when and where to use force and whom to target, then, because they are incapable of the same complexities of human moral and ethical thought, they are making these decisions based on algorithms and rendering humans into data points. Combatants and non-combatants alike are thus dehumanized. If combatants and non-combatants are reduced to data points, then what human dignity can exist? Those on the receiving end of the machine's use of force were never considered or understood to be human by the machine; the system cannot understand humanness. More simply, it is argued that humans have the right to die at the hands of other humans rather than machines. As previously stated, machines are not capable of moral and ethical thought in the same way humans are, and fully autonomous weapons do not possess motives, intentions, or feelings; the targets they select are chosen solely because they fit a specific data set that informs the robot. The targets have been dehumanized. A level of emotional intelligence is required and expected in a use-of-force decision that cannot be simulated in any meaningful way—it is a level of thought that requires a human. Humans must be involved to ensure that operations are both ethically and morally sound. Machines cannot comprehend the complex emotional nuances involved in making morally and ethically sound judgments about the use of force.19

The Case for Full Autonomy

We build machines not only to do tasks we don’t want to do, but also to do tasks we can’t do, or to do the ones we can more efficiently.20 Waging war places humans in challenging situations under constant, extreme stress. In such an environment, as in all others, humans make mistakes. With fully autonomous weapons, militaries will have more capabilities at their disposal and will be able to place personnel in less danger. Moreover, these systems will be faster, more accurate, and unaffected by the stresses humans often succumb to. Without humans in the loop, military planners can utilize tools that enhance mission efficacy and duration. Weapon systems capable of operating without humans can perform in more places, for longer, and with greater efficiency than the human warfighters they could replace.21 These systems provide increased targeting accuracy and make fewer mistakes. This means they can engage legitimate combatants more accurately, reducing the harm done to non-combatants in the same space as any potential combatants.22 If we can reduce physical harm and mental-health effects for humans simply by removing the human element from the use of force, it could be argued that these technologies are necessary, as we are obligated to do the least amount of harm possible when using force. These systems could reduce human error and the harm it causes to combatants and non-combatants.23

We must pursue whatever technology enables us to do the most good while minimizing harm. As previously stated, fully autonomous weapon systems reduce harm to warfighters and non-combatants. They take humans out of harm’s way, providing military planners with ways to fight wars without sending in human personnel to carry out use-of-force actions. However, harm is not only physical but also psychological, and psychological harm can extend beyond the battlefield to affect passive observers of warfare. So, what if there were no more war stories, no more PTSD, no more military funerals, and so on? It can be reasoned that if humans experience less trauma because fully autonomous weapon systems are fighting wars, we are reducing the harmful psychological impact of warfare on society as a whole.24

Final Thoughts

What makes a human being in the loop more ethical than machines with full autonomy? While there are compelling arguments on both sides of this debate, the arguments presented by the opposition to fully autonomous weapons systems are more persuasive. However, this does not mean that this paper rejects the arguments of those in favor of increasing the autonomy of weapon systems. Several moral, ethical, and practical points regarding harm reduction will be further explored and discussed in this section.

However, this paper does not address the realities and security concerns associated with not pursuing these technologies, even though our adversaries will undoubtedly seek these sorts of capabilities. That set of issues raises its own ethical questions and warrants further research, particularly in terms of how it intersects with moral obligations and ethical practices.

The arguments between the two sides are numerous, with many points and counterpoints. In such a limited space, only a few of these topics could be discussed; many of the arguments, however, appear to raise moral and ethical questions concerning humans, or the lack thereof, in the use of force.

This paper argues that one of the key elements in this conversation is that, while humans possess many notable flaws and weaknesses, our moral and ethical reasoning truly distinguishes our species from machines. Humans can navigate the critical nuances of human nature in ways that machines can never simulate.25 This claim rejects the notion that our programming capabilities will reach a level where robots can become moral agents in a meaningful sense. Some argue that we can build so-called “moral machines”—systems with artificial intelligence capable of simulating morals and ethics comparable to human reasoning.26 But the reality is that morals and ethics require emotional intelligence to construct, to navigate, and to understand when they have been violated and why those violations matter. Emotional intelligence communicates essential details about morals and ethics and informs situational reasoning. Machines are currently incapable of this level of thought, and there is considerable doubt about future programming capabilities. There could be a situation where autonomous capabilities far outpace any capability to program a moral and ethical reasoning system.27

Now, aligning closer to those who advocate for greater autonomy, this paper finds that researching and improving our understanding of these technologies would be morally and ethically permissible if only to create countermeasures. It is reasonable to assume that even if there is an internationally agreed-upon set of moral and ethical norms governing the level of autonomy of weapon systems, some entities will have no interest in following said norms and might deploy fully autonomous weapons if that technology is available.28

Some express confidence in our ability to produce autonomous weapons systems that can replace human moral and ethical judgment on and above the battlefield through programming safeguards or restrictions, even if those systems cannot possess proper ethical and moral reasoning.29 Others go further, suggesting that we can create successful “moral machines”: artificial intelligence capable not only of simulating human emotions but of developing the ability to engage in complex moral and ethical reasoning on its own, even where we cannot program that capability directly.30 However, if robots cannot have true moral and ethical reasoning as humans do, this paper argues that they cannot engage in the use of force with total autonomy. If autonomous weapon systems lack the programming necessary to make complex moral and ethical decisions as humans do, they cannot be held accountable.31 Without this level of moral and ethical reasoning, weapon systems cannot be moral agents. They cannot understand why something is right or wrong, nor can they comprehend any judgment levied against them for an ethical or legal violation.32

There can be no accountability when machines are used as arbiters of force. If there can be no accountability, there can be no justice for transgressions against our humanity when the violations come solely from machines.33 Connecting accountability and responsibility to the use of force becomes easier with humans in the loop.34 To uphold existing moral and ethical norms, there must be a system of accountability to ensure that war remains, as Walzer would say, “a rules-governed activity.”35 The responsibility gap alone is enough to question the moral and ethical use of fully autonomous weapons. However, perhaps we could mitigate the impact of, or even overlook, the responsibility gap if we could drastically reduce the potential harm resulting from the use of force.

It could be argued that harm would be reduced with greater efficiency and targeting accuracy. We must strive to do the least harm possible in any situation, and this is a compelling point. If removing humans from the loop means increased capability for harm reduction, should that not ultimately be our priority? Many advocates for greater autonomy argue that these systems will make conducting warfare safer and more efficient, reducing harm to combatants and non-combatants alike and possibly even eliminating many of the factors that lead to unintended consequences. These points should be recognized and deserve further research. However, this paper aligns with the opposition's view that these issues could also be addressed through more accurate munitions and targeting systems, as well as existing semi-autonomous weapon systems, while avoiding the more complex moral and ethical questions surrounding weapon systems with total autonomy.

Furthermore, does efficiency necessarily mean less harm will be done? Efficiency could also manifest as shorter wars characterized by greater violence, especially if weapons systems can learn from combat experience or, perhaps worse, the darkest forms of human behavior. The fully autonomous machines themselves could decide that the most efficient course of action is greater brutality.36

As mentioned previously, harm is not only physical but also psychological. It is often argued that autonomous weapon systems reduce harm by removing or reducing the risk of physical harm to combatants and non-combatants, as well as helping to eliminate psychological distress. Michael Horowitz points out that drone pilots also suffer from PTSD. It could be argued that, for this reason, humans should be removed from the loop altogether, thereby reducing the harm done to warfighters and any unintended consequences that may arise from their psychological distress during the use of force.37

While this paper acknowledges the need to minimize harm at all costs, it does not find that the research establishes potential harm reduction through reliance on fully autonomous weapon systems as a sufficient reason to abandon a stringent regulatory approach to these systems. Regarding fully autonomous technologies, the ambiguity surrounding future capabilities, their uses, and programming limitations leaves many moral and ethical questions unanswered. This paper argues that the potential risks to human rights and to any system of moral and ethical norms greatly outweigh the possible harm reduction. The harmful impacts on combatants and non-combatants are of great moral and ethical concern, and we must find ways to reduce and eliminate those harms. However, questions remain about whether more autonomy for weapon systems is the answer.

We must take this time to understand these weapons and, more specifically, how they change the nature of warfare and what moral and ethical limits we should or should not place on their use. For now, the only way to be confident in our ability to avoid any potential exponential disaster in a world of weapon systems with complete autonomy is to ensure that there can be no such world by keeping humans in the decision-making chain for the use of force. With humans in the loop, we can ensure that accountability is pursued and maintain trust that justice will be served when our humanity is attacked.

Future research should focus more narrowly. One interesting related question: if autonomous weapon systems could engage in meaningful moral and ethical reasoning, how might we decide to treat them? Could they be treated as humans according to the moral and ethical norms and rules for combatants and non-combatants? If so, how could that reshape some of the issues addressed in this paper, particularly the responsibility gap?

  1. Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31.

  2. Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Robert Sparrow, “Can Machines Be People? Reflections on the Turing Triage Test.,” in Robot Ethics: The Ethical and Social Implications of Robotics. (Cambridge: MIT Press, 2011), 16; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116, https://doi.org/10.1017/S0892679415000647; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6.

  3. Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31.

  4. Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200.

  5. Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Patrick Taylor Smith, “Just Research into Killer Robots,” Ethics and Information Technology 21, no. 4 (December 2019): 281–93, https://doi.org/10.1007/s10676-018-9472-6; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x.

  6. Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Patrick Taylor Smith, “Just Research into Killer Robots,” Ethics and Information Technology 21, no. 4 (December 2019): 281–93, https://doi.org/10.1007/s10676-018-9472-6; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x.

  7. Krishnan Armin, “Ch.6 Dangerous Futures and Arms Control.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 145–65, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.; Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200; Heather Roff, “To Ban or Regulate Autonomous Weapons: A US Response,” Bulletin of the Atomic Scientists 72, no. 2 (March 3, 2016): 122–24, https://doi.org/10.1080/00963402.2016.1145920; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Richard Stone, “Scientists Campaign Against Killer Robots,” Science 342, no. 6165 (December 20, 2013): 1428–29, https://doi.org/10.1126/science.342.6165.1428.

  8. Bartek Chomanski, “Should Moral Machines Be Banned? A Commentary on van Wynsberghe and Robbins ‘Critiquing the Reasons for Making Artificial Moral Agents,’” Science and Engineering Ethics 26, no. 6 (December 2020): 3469–81, https://doi.org/10.1007/s11948-020-00255-9; Paul Formosa and Malcolm Ryan, “Making Moral Machines: Why We Need Artificial Moral Agents,” AI & SOCIETY 36, no. 3 (September 2021): 839–51, https://doi.org/10.1007/s00146-020-01089-6; Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, no. 3 (2004): 175–83, https://doi.org/10.1007/s10676-004-3422-1; Luís Moniz Pereira and António Barata Lopes, Machine Ethics: From Machine Morals to the Machinery of Morality, vol. 53, Studies in Applied Philosophy, Epistemology and Rational Ethics (Cham: Springer International Publishing, 2020), https://doi.org/10.1007/978-3-030-39630-5; L. Righetti et al., “Lethal Autonomous Weapon Systems [Ethical, Legal, and Societal Issues],” IEEE Robotics & Automation Magazine 25, no. 1 (March 2018): 123–26, https://doi.org/10.1109/MRA.2017.2787267; Heather Roff, “To Ban or Regulate Autonomous Weapons: A US Response,” Bulletin of the Atomic Scientists 72, no. 2 (March 3, 2016): 122–24, https://doi.org/10.1080/00963402.2016.1145920; Heather M. Roff, “The Strategic Robot Problem: Lethal Autonomous Weapons in War,” Journal of Military Ethics 13, no. 3 (July 3, 2014): 211–27, https://doi.org/10.1080/15027570.2014.975010; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116, https://doi.org/10.1017/S0892679415000647; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; Richard Stone, “Scientists Campaign Against Killer Robots,” Science 342, no. 6165 (December 20, 2013): 1428–29, https://doi.org/10.1126/science.342.6165.1428; Paul Scharre: Army of None: Autonomous Weapons and the Future of War, 2018, https://www.youtube.com/watch?v=Z9hVOCUUBvM; “International Committee for Robot Arms Control,” n.d., www.icrac.net/about-icrac/; “Remarks at ‘Web Summit’ | United Nations Secretary-General,” accessed May 1, 2022, https://www.un.org/sg/en/content/sg/speeches/2018-11-05/remarks-web-summit; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/; Pranshu Verma, “The Military Wants ‘Robot Ships’ to Replace Sailors in Battle.,” The Washington Post, April 14, 2022, https://www.washingtonpost.com/technology/2022/04/14/navy-robot-ships/.

  9. Paul Formosa and Malcolm Ryan, “Making Moral Machines: Why We Need Artificial Moral Agents,” AI & SOCIETY 36, no. 3 (September 2021): 839–51, https://doi.org/10.1007/s00146-020-01089-6; Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200; Heather Roff, “To Ban or Regulate Autonomous Weapons: A US Response,” Bulletin of the Atomic Scientists 72, no. 2 (March 3, 2016): 122–24, https://doi.org/10.1080/00963402.2016.1145920; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Richard Stone, “Scientists Campaign Against Killer Robots,” Science 342, no. 6165 (December 20, 2013): 1428–29, https://doi.org/10.1126/science.342.6165.1428.

  10. Thomas Obel Hansen, “Opportunities and Challenges Seeking Accountability for War Crimes in Palestine under the International Criminal Court’s Complementarity Regime,” SSRN Electronic Journal, 2018, https://doi.org/10.2139/ssrn.3250325; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, no. 3 (2004): 175–83, https://doi.org/10.1007/s10676-004-3422-1; Heather Roff, “To Ban or Regulate Autonomous Weapons: A US Response,” Bulletin of the Atomic Scientists 72, no. 2 (March 3, 2016): 122–24, https://doi.org/10.1080/00963402.2016.1145920; Jamie Rowen, “The Challenge of Criminal Accountability for Atrocious Policies,” Cato Unbound, September 20, 2019, https://www.cato-unbound.org/2019/09/20/jamie-rowen/challenge-criminal-accountability-atrocious-policies/; Lauren Sanders, “Accountability and Ukraine: Hurdles to Prosecuting War Crimes and Aggression,” Lieber Institute West Point, March 9, 2022, https://lieber.westpoint.edu/accountability-ukraine-hurdles-prosecuting-war-crimes-aggression/; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116, https://doi.org/10.1017/S0892679415000647; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; Richard Stone, “Scientists Campaign Against Killer Robots,” Science 342, no. 6165 (December 20, 2013): 1428–29, https://doi.org/10.1126/science.342.6165.1428; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  11. Krishnan Armin, “Dangerous Futures and Arms Control.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 145–65, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.; Krishnan Armin, “Ethical Considerations.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 118–43, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.; Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, no. 3 (2004): 175–83, https://doi.org/10.1007/s10676-004-3422-1; L. Righetti et al., “Lethal Autonomous Weapon Systems [Ethical, Legal, and Societal Issues],” IEEE Robotics & Automation Magazine 25, no. 1 (March 2018): 123–26, https://doi.org/10.1109/MRA.2017.2787267; Heather Roff, “To Ban or Regulate Autonomous Weapons: A US Response,” Bulletin of the Atomic Scientists 72, no. 2 (March 3, 2016): 122–24, https://doi.org/10.1080/00963402.2016.1145920; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116, https://doi.org/10.1017/S0892679415000647; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; “International Committee for Robot Arms Control,” n.d., www.icrac.net/about-icrac/; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  12. Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x.

  13. “Stop Killer Robots,” accessed April 13, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  14. Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, no. 3 (2004): 175–83, https://doi.org/10.1007/s10676-004-3422-1; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Can Machines Be People? Reflections on the Turing Triage Test.,” in Robot Ethics: The Ethical and Social Implications of Robotics. (Cambridge: MIT Press, 2011), 16; Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116, https://doi.org/10.1017/S0892679415000647; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  15. Krishnan Armin, “Dangerous Futures and Arms Control.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 145–65, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.; Krishnan Armin, “Ethical Considerations.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 118–43, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Heather M. Roff and David Danks, “‘Trust but Verify’: The Difficulty of Trusting Autonomous Weapons Systems,” Journal of Military Ethics 17, no. 1 (January 2, 2018): 2–20, https://doi.org/10.1080/15027570.2018.1481907; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Can Machines Be People? Reflections on the Turing Triage Test.,” in Robot Ethics: The Ethical and Social Implications of Robotics. (Cambridge: MIT Press, 2011), 16; Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116, https://doi.org/10.1017/S0892679415000647; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; Richard Stone, “Scientists Campaign Against Killer Robots,” Science 342, no. 6165 (December 20, 2013): 1428–29, https://doi.org/10.1126/science.342.6165.1428; “International Committee for Robot Arms Control,” n.d., www.icrac.net/about-icrac/; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  16. Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x.

  17. Colin Allen and Wendell Wallach, “Moral Machines: Contradiction in Terms or Abdication of Human Responsibility?,” in Robot Ethics: The Ethical and Social Implications of Robotics. (Cambridge: MIT Press, 2011), 55–68; Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, no. 3 (2004): 175–83, https://doi.org/10.1007/s10676-004-3422-1; L. Righetti et al., “Lethal Autonomous Weapon Systems [Ethical, Legal, and Societal Issues],” IEEE Robotics & Automation Magazine 25, no. 1 (March 2018): 123–26, https://doi.org/10.1109/MRA.2017.2787267; Heather Roff, “To Ban or Regulate Autonomous Weapons: A US Response,” Bulletin of the Atomic Scientists 72, no. 2 (March 3, 2016): 122–24, https://doi.org/10.1080/00963402.2016.1145920; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116, https://doi.org/10.1017/S0892679415000647; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6.

  18. “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  19. Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; Richard Stone, “Scientists Campaign Against Killer Robots,” Science 342, no. 6165 (December 20, 2013): 1428–29, https://doi.org/10.1126/science.342.6165.1428; Paul Scharre: Army of None: Autonomous Weapons and the Future of War, 2018, https://www.youtube.com/watch?v=Z9hVOCUUBvM; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  20. Krishnan Armin, “Ethical Considerations.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 118–43, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.

  21. Krishnan Armin, “Dangerous Futures and Arms Control.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 145–65, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.; Krishnan Armin, “Ethical Considerations.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 118–43, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.; Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200; Pranshu Verma, “The Military Wants ‘Robot Ships’ to Replace Sailors in Battle.,” The Washington Post, April 14, 2022, https://www.washingtonpost.com/technology/2022/04/14/navy-robot-ships/.

  22. Krishnan Armin, “Dangerous Futures and Arms Control.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 145–65, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.

  23. Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200.

  24. Krishnan Armin, “Dangerous Futures and Arms Control.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 145–65, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.; Krishnan Armin, “Ethical Considerations.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 118–43, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.; Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200.

  25. Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  26. Colin Allen and Wendell Wallach, “Moral Machines: Contradiction in Terms or Abdication of Human Responsibility?,” in Robot Ethics: The Ethical and Social Implications of Robotics. (Cambridge: MIT Press, 2011), 55–68; Bartek Chomanski, “Should Moral Machines Be Banned? A Commentary on van Wynsberghe and Robbins ‘Critiquing the Reasons for Making Artificial Moral Agents,’” Science and Engineering Ethics 26, no. 6 (December 2020): 3469–81, https://doi.org/10.1007/s11948-020-00255-9; Paul Formosa and Malcolm Ryan, “Making Moral Machines: Why We Need Artificial Moral Agents,” AI & SOCIETY 36, no. 3 (September 2021): 839–51, https://doi.org/10.1007/s00146-020-01089-6; Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, no. 3 (2004): 175–83, https://doi.org/10.1007/s10676-004-3422-1; Luís Moniz Pereira and António Barata Lopes, Machine Ethics: From Machine Morals to the Machinery of Morality, vol. 53, Studies in Applied Philosophy, Epistemology and Rational Ethics (Cham: Springer International Publishing, 2020), https://doi.org/10.1007/978-3-030-39630-5; Patrick Taylor Smith, “Just Research into Killer Robots,” Ethics and Information Technology 21, no. 4 (December 2019): 281–93, https://doi.org/10.1007/s10676-018-9472-6; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Can Machines Be People? Reflections on the Turing Triage Test.,” in Robot Ethics: The Ethical and Social Implications of Robotics. (Cambridge: MIT Press, 2011), 16; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; Richard Stone, “Scientists Campaign Against Killer Robots,” Science 342, no. 6165 (December 20, 2013): 1428–29, https://doi.org/10.1126/science.342.6165.1428.

  27. Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Can Machines Be People? Reflections on the Turing Triage Test.,” in Robot Ethics: The Ethical and Social Implications of Robotics. (Cambridge: MIT Press, 2011), 16; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  28. Patrick Taylor Smith, “Just Research into Killer Robots,” Ethics and Information Technology 21, no. 4 (December 2019): 281–93, https://doi.org/10.1007/s10676-018-9472-6.

  29. Heather M. Roff, “The Strategic Robot Problem: Lethal Autonomous Weapons in War,” Journal of Military Ethics 13, no. 3 (July 3, 2014): 211–27, https://doi.org/10.1080/15027570.2014.975010.

  30. Paul Formosa and Malcolm Ryan, “Making Moral Machines: Why We Need Artificial Moral Agents,” AI & SOCIETY 36, no. 3 (September 2021): 839–51, https://doi.org/10.1007/s00146-020-01089-6; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x.

  31. Paul Formosa and Malcolm Ryan, “Making Moral Machines: Why We Need Artificial Moral Agents,” AI & SOCIETY 36, no. 3 (September 2021): 839–51, https://doi.org/10.1007/s00146-020-01089-6; Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, no. 3 (2004): 175–83, https://doi.org/10.1007/s10676-004-3422-1; L. Righetti et al., “Lethal Autonomous Weapon Systems [Ethical, Legal, and Societal Issues],” IEEE Robotics & Automation Magazine 25, no. 1 (March 2018): 123–26, https://doi.org/10.1109/MRA.2017.2787267; Heather Roff, “To Ban or Regulate Autonomous Weapons: A US Response,” Bulletin of the Atomic Scientists 72, no. 2 (March 3, 2016): 122–24, https://doi.org/10.1080/00963402.2016.1145920; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116, https://doi.org/10.1017/S0892679415000647; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  32. Robert Sparrow, “Can Machines Be People? Reflections on the Turing Triage Test.,” in Robot Ethics: The Ethical and Social Implications of Robotics. (Cambridge: MIT Press, 2011), 16; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  33. Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116, https://doi.org/10.1017/S0892679415000647; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; “International Committee for Robot Arms Control,” n.d., www.icrac.net/about-icrac/; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  34. Colin Allen and Wendell Wallach, “Moral Machines: Contradiction in Terms or Abdication of Human Responsibility?,” in Robot Ethics: The Ethical and Social Implications of Robotics. (Cambridge: MIT Press, 2011), 55–68; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116, https://doi.org/10.1017/S0892679415000647; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6.

  35. Michael Walzer, “The Crime of War.,” in Just and Unjust Wars. (Basic Books, 2015), 21.

  36. Paul Formosa and Malcolm Ryan, “Making Moral Machines: Why We Need Artificial Moral Agents,” AI & SOCIETY 36, no. 3 (September 2021): 839–51, https://doi.org/10.1007/s00146-020-01089-6; Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (February 2007): 62–77, https://doi.org/10.1111/j.1468-5930.2007.00346.x; Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116, https://doi.org/10.1017/S0892679415000647; Robert Sparrow, “Why Machines Cannot Be Moral,” AI & SOCIETY 36, no. 3 (September 2021): 685–93, https://doi.org/10.1007/s00146-020-01132-6; “Stop Killer Robots,” accessed May 1, 2022, https://www.stopkillerrobots.org/stop-killer-robots/.

  37. Krishnan Armin, “Dangerous Futures and Arms Control.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 145–65, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.; Krishnan Armin, “Ethical Considerations.,” in Killer Robots: Legality and Ethicality of Autonomous Weapons. (London: Taylor & Francis Group, 2009), 118–43, http://ebookcentral.proquest.com/lib/gmu/detail.action?docID=5208002.; Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36, https://doi.org/10.1162/DAED_a_00409; Alex Leveringhaus, “What’s So Bad About Killer Robots?,” Journal of Applied Philosophy 35, no. 2 (May 2018): 341–58, https://doi.org/10.1111/japp.12200; Nathan J Lucas, “Lethal Autonomous Weapon Systems: Issues for Congress,” n.d., 31; Pranshu Verma, “The Military Wants ‘Robot Ships’ to Replace Sailors in Battle.,” The Washington Post, April 14, 2022, https://www.washingtonpost.com/technology/2022/04/14/navy-robot-ships/.