Chad Lawhorn | 03/16/2024

The Impact of AI on Defense and Intelligence Systems:

Autonomous Weapons, Strategic Weapons, Missile Defense, and Command and Control Systems

Introduction

The development of artificial intelligence (AI) is sparking a transformation in military thinking, weapons, and other systems used across the defense and intelligence sectors. Integrating AI-driven technologies into these systems could fundamentally transform the dynamics of strategic weapons systems, missile defense, battlefield planning, and the nature of conflict and deterrence altogether (Johnson, 2021, 2023; Kaur, 2024). The advent of AI technology in the military strategic space necessitates a reevaluation of existing strategic cultures, international norms, and diplomatic responses to address the challenges and opportunities that AI technologies pose for global security and stability. At the strategic level, future arms control efforts will likely need to account for the integration of these emerging AI technologies into strategic weaponry, communications, surveillance, and defense systems. The international community and war planners must additionally consider the implications of AI-driven technologies in autonomous systems at the battlefield level (Etzioni & Etzioni, 2017; Laird, 2020; Lamberth & Scharre, 2022, 2023; Luca & Trager, 2024; Sędkowski, 2021).

AI's role in military applications is not new. However, its rapid advancement and increasing sophistication raise the possibility of an era in which autonomous systems make life-or-death decisions without human intervention (Sędkowski, 2021; Walker, 2021). These technologies could also improve the capabilities of numerous other systems, such as military satellites, radar systems, and missile defense, adding sophisticated algorithms that identify threats at a speed impossible for human operators (European Defence Agency, 2020; European Space Agency, 2023; Helfrich, 2022; Scholten & Turner, 2019). As nations work to seize the strategic opportunities offered by AI technologies, the global security landscape faces a new, challenging environment in which the traditional concepts of strategic deterrence, use-of-force norms, the balance of power, and the very nature of conflict are rapidly shifting, perhaps faster than many governments can adapt (Horowitz et al., 2018; Hunter et al., 2023; Mekonnen, 2021; Schmidt, 2022).

This paper examines the impact of emerging AI technologies on nuclear weapons, strategic deterrence, lethal autonomous weapon systems (LAWS), and international norms and agreements, exploring the historical development of AI and autonomous systems within the defense and intelligence sectors and assessing their current and future capabilities. It examines international efforts to address these systems and how policymakers and diplomatic leaders are responding to AI's integration into military systems as they debate established and future international security norms and practices. Furthermore, this research examines the risks and opportunities associated with AI-driven autonomous systems in nuclear weapons technologies, as well as the current arguments for and against these technological advances and their integration into these systems.

Overview: AI and Autonomous Systems in the Military 

The early use of AI in military applications can be traced back to the Cold War era. As the Soviet Union and the United States competed for strategic and technological advantages, AI research primarily focused on developing advanced strategic defense systems and intelligence analysis tools (Hanson, 1987; Leese, 2023; Military Embedded Systems, 2019; Sędkowski, 2021; Slayton, 2020). The United States and the Soviet Union invested in AI to enhance their nuclear deterrence capabilities, recognizing early on that computer-based intelligence tools could provide strategic advantages in information processing, threat assessment, and decision-making speed (Hanson, 1987; Leese, 2023). By the late 20th century, the focus of AI development in the military had expanded beyond strategic defense to include lethal autonomous weapons systems (CSIS Missile Defense Project, 2023; McCormick, 2014; Mizokami, 2024; Stoner, 2009). These advancements represented a shift towards more autonomous, offensive capabilities, pushing warfare and strategic calculations in a new direction (Konaev et al., 2020; Laird, 2020; McCormick, 2014).

United States

The United States has consistently been at the forefront of integrating autonomous systems into its military capabilities, with a focus on both offensive and defensive applications (Hawley, 2017; Hiebert, 2024; McCormick, 2014; Military Embedded Systems, 2019; Mizokami, 2024; Stoner, 2009; Slayton, 2020). For example, the United States pioneered the integration of early forms of AI in precision-guided munitions, such as the terrain contour matching (TERCOM) guidance system of the Tomahawk cruise missile and other advanced targeting capabilities (CSIS Missile Defense Project, 2023). Similarly, the development of unmanned aerial vehicles (UAVs), such as the Predator drone, marked a significant leap forward, incorporating advanced capabilities for flight control and targeting operations, with future AI advancements being integrated into UAV systems (Connor, 2018; McCormick, 2014). The United States Navy also introduced the Phalanx Close-In Weapon System (CIWS), which uses autonomous computer-controlled targeting systems to rapidly engage threats in close proximity to ships (Mizokami, 2024; Stoner, 2009; USN, 2021). The United States Army operates a variant of this system known as the C-RAM Centurion Phalanx (Army Recognition, 2023). Weapons systems have seen a significant march towards greater autonomy and AI system integration; however, missile defense and detection systems have likely seen the most advancement toward AI integration over the decades (Hawley, 2017; Helfrich, 2022; Lockheed Martin, 2023; McCormick, 2014; Merwe, 2021; Scholten & Turner, 2019).

The implementation of advanced targeting algorithms and increasingly sophisticated guidance and threat analysis capabilities in ballistic missile defense systems has been a critical development in the advancement of autonomous military systems toward true AI, particularly for the United States and its allies (Hawley, 2017; Lockheed Martin, 2023; McCormick, 2014; Merwe, 2021; Sędkowski, 2021; Vincent, 2022). Systems such as the Terminal High Altitude Area Defense (THAAD), the Patriot, and the Aegis missile systems utilize these advanced targeting algorithms to detect, track, and intercept incoming ballistic missiles and other threats (DOTE, 2013; Hawley, 2017; Lockheed Martin, 2023). The jointly developed American-Israeli system known as Iron Dome also utilizes advanced targeting systems to intercept a wide range of aerial threats, including missiles, drones, and even conventional artillery shells (Merwe, 2021; Sędkowski, 2021).

The autonomous capabilities of the THAAD and Patriot missile systems are rooted in their advanced radar and fire control systems, which are capable of detecting, tracking, and engaging ballistic missile threats. Each system's radar can classify and prioritize threats in a highly contested environment, allowing it to launch interceptors with little human input (DOTE, 2013; Hawley, 2017). The Aegis missile system's autonomy is highlighted by its ability to engage multiple targets simultaneously under the guidance of its computer-controlled combat management system. It can automatically detect, track, and prioritize potential threats and engage them with missiles without manual intervention (Lockheed Martin, 2023, n.d.). Similar to the Aegis system, the Iron Dome's decision-making process for intercepting an incoming projectile involves complex algorithms and data analysis, which can be considered a form of artificial intelligence. These algorithms enable the system to assess threats in real-time and make autonomous interception decisions with minimal human intervention (Merwe, 2021; Sędkowski, 2021). However, while these advancements have allowed the United States to remain dominant in this technological pursuit, it faces near-peer competitors in the AI and autonomous weapons space (Horowitz, 2018; Hunter et al., 2023; Schmidt, 2022).
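The detect-classify-prioritize-engage loop these sources describe can be caricatured in a few lines of code. The following sketch is purely illustrative: every class name, field, and number is invented, and real fire-control logic is classified and vastly more sophisticated. It shows only the general idea of filtering tracks to those predicted to threaten a defended asset and ordering them by time to impact.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    speed_m_s: float        # closing speed toward the defended asset
    range_m: float          # current distance from the defended asset
    headed_at_asset: bool   # predicted impact point falls on the asset

def time_to_impact(track: Track) -> float:
    """Seconds until the track reaches the defended asset at current speed."""
    return track.range_m / track.speed_m_s

def prioritize(tracks: list[Track]) -> list[Track]:
    """Keep only tracks predicted to hit the asset; engage soonest-impact first."""
    threats = [t for t in tracks if t.headed_at_asset]
    return sorted(threats, key=time_to_impact)

tracks = [
    Track("T1", speed_m_s=800.0, range_m=40_000, headed_at_asset=True),
    Track("T2", speed_m_s=300.0, range_m=9_000, headed_at_asset=True),
    Track("T3", speed_m_s=250.0, range_m=50_000, headed_at_asset=False),
]
queue = prioritize(tracks)
print([t.track_id for t in queue])  # ['T2', 'T1']: 30 s to impact before 50 s
```

The point of the toy is the ordering logic, not the physics: a real system would fuse radar returns, predict full trajectories, and weigh interceptor inventory, all under hard real-time constraints.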

Russia and the Soviet Union

Moscow has invested in numerous autonomous and AI systems over the decades. During the Cold War, the Soviet Union focused on AI for strategic defense and space exploration, aiming to match the United States' technological capabilities in warfare. Since the end of the Cold War, Russia has prioritized AI in its military modernization efforts, developing autonomous combat systems and AI-driven cyber warfare tools (Chernenko & Markotkin, 2020; Leese, 2023; Voronova, 2022).

Notably, early Soviet AI research and development grew out of cybernetics studies. These efforts explored applications ranging from machine learning to advanced radar systems and pattern recognition computing. Although this research was largely theoretical, the ideas it generated have found their way into modern advancements (Kassel, 1971; Voronova, 2022).

In the nuclear space, one Soviet system has attained near-mythical status. This so-called doomsday system, known as the "Dead Hand" or the Perimeter system, was a Soviet nuclear command, control, and communications (NC3) system designed to ensure that the Soviet Union could carry out a second strike in the event of a massive decapitating nuclear attack by the United States and its allies. Perimeter would activate a second strike using an array of sensory and communications technologies that could autonomously detect the aftermath of a catastrophic attack; this added uncertainty for any potential aggressor and thus provided a layer of strategic deterrence (Stilwell, 2022; Thompson, 2009). Over the decades, Moscow has continued to make advancements in automation and AI integration in the military space (Chernenko & Markotkin, 2020; Hunter et al., 2023; Nocetti, 2020). These include advancements in autonomous combat robotics and drones, as well as an allegedly AI-driven nuclear-capable torpedo (Chernenko & Markotkin, 2020; Hunter et al., 2023; Kaur, 2023; Nocetti, 2020; Siu, 2022). However, Russia's autonomous and AI technologies and capabilities remain notably less mature than those of China and the United States (Chernenko & Markotkin, 2020).

China

China's emergence as a significant player in the AI-military space reflects its ambition to become a leading global technological power (Allen, 2019; Nelson & Epstein, 2022; State Council, 2017). China has invested heavily in developing AI capabilities for surveillance, cyber operations, and autonomous combat systems (Allen, 2019; Lee, 2024). The People's Liberation Army (PLA) already actively incorporates AI into its military doctrine, aiming to leverage advanced systems for intelligence and surveillance gathering, cyber warfare, precision strike, and defense against missile threats; AI is a core element of its modernization efforts (Allen, 2019; Nelson & Epstein, 2022; State Council, 2017). President Xi Jinping has called for integrating AI technologies into all military planning and operations (Allen, 2019; State Council, 2017). Furthermore, China's AI research and development investments aim to counter U.S. technological superiority. Projects include AI-driven missile systems and defensive capabilities designed to neutralize potential threats, including those from advanced U.S. missile defense systems (Allen, 2019; Fedasiuk, 2020; State Council, 2017).

International Norms and Agreements

The international community has increasingly recognized the need to develop norms and agreements governing the use of AI in military applications, particularly in the context of nuclear deterrence. The rapid advancement of AI technologies presents opportunities and challenges for global security, necessitating a dialogue on how these tools should be integrated into national defense strategies and international security frameworks. Existing international treaties on nuclear weapons, such as the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), provide a foundational framework but lack specific provisions for the governance of AI technologies. This gap highlights the need for updated or new agreements that explicitly address the use of AI in nuclear deterrence and strategic stability (Lamberth & Scharre, 2022, 2023; Luca & Trager, 2024; Mekonnen, 2021; NTI, 2022; Shoker et al., 2024).

One of the significant challenges in forming an international consensus on AI norms and agreements is the divergent interests and security concerns of global powers. The United States, China, and Russia, among others, are investing heavily in AI for military purposes, each with different perspectives on the balance between technological advancement and regulatory measures. This divergence complicates efforts to establish uniform standards or agreements that all key players can accept. The debates within the United Nations on LAWS exemplify these difficulties, with varying opinions on the extent to which autonomous weapons should be regulated (Boulanin, 2018; Horowitz, 2018; Hunter et al., 2023; Laird, 2020; Nocetti, 2020; State Council, 2017).

Building a robust framework of norms and agreements on AI-driven military systems becomes even more challenging when accounting for the clear strategic advantages such technology could bring to defense and intelligence operations, and for the fact that AI technologies are dual-use, with many applications well beyond defense (Lamberth & Scharre, 2022, 2023; Rautenbach, 2022; Reiner & Wehsener, 2019; Vincent, 2022). Despite the challenges, there have been discussions on frameworks and principles that could guide the development and use of AI in military contexts, including nuclear deterrence. However, much of the regulatory talk over AI in military applications remains at the individual state level; there are, so far, no clear international leaders on military AI control protocols (Gill, 2024; Paoli et al., 2020; Verdiesen et al., 2021). The United States, however, has an opportunity to bring the international community together to set such standards for the use of AI-driven systems in defense and intelligence (Gill, 2024; Kahn, 2024). Any agreement must include transparency measures, accountability standards, and verification mechanisms to ensure effective implementation. These agreements could take the form of entirely new instruments or be built upon existing international norms and agreements (Horowitz & Scharre, 2021; Johnson, 2023; Lamberth & Scharre, 2022, 2023; Puscas, 2023; Weinbaum, 2019). These discussions are crucial for preventing an arms race in AI-driven military technologies and ensuring that AI applications in the military space do not undermine international security and stability (Boulanin, 2018; Depp & Scharre, 2024; Horowitz & Scharre, 2021; Puscas, 2023). Moreover, principles such as ensuring human control over critical decision-making processes and prohibiting the development of AI systems that cannot be reliably controlled are gaining support among a broad range of stakeholders (Gill, 2024; Horowitz & Scharre, 2021; Kahn, 2024; Konaev et al., 2020).

Future of AI Technology

As AI integration into military applications advances, speculation about future technologies and their potential applications has become a pivotal area of strategic analysis. AI technologies are expected to revolutionize military capabilities, including autonomous systems operating without human intervention, AI-driven decision-making processes for quicker response times in conflict situations, and advanced surveillance systems capable of interpreting vast amounts of data in real-time (Eckersley, 2018; Fedasiuk, 2020; Hunter et al., 2023; Johnson, 2023; Laird, 2020; Takagi, 2024). 

Significant advancements in AI integration in satellite and radar systems will be integral to national defense, providing critical early warning, surveillance, and reconnaissance capabilities. AI technologies are set to revolutionize these systems by improving the processing and analysis of data collected from space and airborne platforms. AI-driven algorithms can identify potential threats from satellite imagery much faster than human analysts, enhancing the capability to address security threats preemptively and significantly improving early warning and strategic missile detection (Depp & Scharre, 2024; European Defence Agency, 2020; European Space Agency, 2023; Helfrich, 2022; Scholten & Turner, 2019). Additionally, AI-driven ballistic missile defense systems represent a significant development area, promising to detect, track, and intercept incoming threats faster and more accurately than ever before. Technologies such as the Aegis Ballistic Missile Defense System are being enhanced with AI to counter complex threats, including hypersonic missiles. These systems are crucial for nuclear deterrence, providing a protective shield against potential nuclear strikes (Lockheed Martin, 2023; Vincent, 2022).
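The automated imagery triage described above can be gestured at with a deliberately crude sketch: flag image tiles whose brightness stands out against the baseline. Everything here is synthetic and invented for illustration; operational systems use deep learning on classified sensor data, not a simple brightness heuristic.

```python
import statistics

def flag_anomalies(tiles: dict[str, list[float]], factor: float = 3.0) -> list[str]:
    """Flag tiles whose mean intensity exceeds `factor` times the median tile mean."""
    means = {name: statistics.mean(px) for name, px in tiles.items()}
    baseline = statistics.median(means.values())  # robust background estimate
    return [name for name, m in means.items() if m > factor * baseline]

# Toy "satellite image" as named tiles of pixel intensities (all synthetic).
tiles = {
    "ocean_a": [10, 11, 9, 10],       # quiet background
    "ocean_b": [12, 10, 11, 11],
    "launch_site": [90, 95, 92, 94],  # bright thermal signature
}
print(flag_anomalies(tiles))  # ['launch_site']
```

The value of automation here is scale and speed: the same screening logic, however implemented, can run over millions of tiles continuously, which is what lets machine triage outpace human analysts.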

AI enhancements are also being integrated into nuclear weapons technologies. For example, Russia's Poseidon nuclear torpedo is considered a significant part of its advanced military capabilities. It is designed to create widespread radioactive contamination, rendering coastal areas uninhabitable. While information on Russia's use of AI in this weapons system remains classified, AI is believed to enhance its navigation, target selection, and detonation timing (Kaur, 2023; Siu, 2022; Sutton, 2022).

The integration of AI will enhance a wide range of offensive and defensive systems by improving target recognition, tracking capabilities, and interception accuracy. Machine learning algorithms can differentiate between decoys and actual warheads, predict missile trajectories, and optimize intercept strategies in real-time (Johnson, 2021, 2022, 2023). The future of AI technology in the military holds both promising opportunities and significant challenges. As the world stands on the brink of potentially transformative advancements, it is crucial to carefully navigate the ethical, strategic, and operational implications. International cooperation and dialogue are essential to establish norms and agreements that harness the benefits of AI for global security while mitigating the risks associated with its military applications. The dual-edged nature of AI in nuclear deterrence and global security dynamics underscores the need for a balanced approach that prioritizes human oversight, ethical considerations, and strategic stability (Horowitz, 2018; Hunter et al., 2023; Johns, 2023; Johnson, 2023; Mekonnen, 2021; Rashid et al., 2023).
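The decoy-discrimination problem mentioned above can be illustrated with a minimal machine-learning sketch: a nearest-centroid classifier over two invented features (a ballistic-coefficient proxy and radar cross-section). The features, values, and labels are all synthetic; real discrimination relies on classified sensor data and far richer models.

```python
import math

# Synthetic training examples: (ballistic_coefficient_proxy, radar_cross_section_m2).
# Heavy reentry vehicles decelerate less and tend to have small radar returns;
# light decoys slow quickly and reflect more. All numbers are invented.
TRAINING = {
    "warhead": [(900.0, 0.05), (950.0, 0.06), (880.0, 0.04)],
    "decoy":   [(120.0, 0.30), (100.0, 0.35), (140.0, 0.28)],
}

def centroid(points: list[tuple[float, float]]) -> tuple[float, ...]:
    """Mean of each feature across the class's training points."""
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(features: tuple[float, float]) -> str:
    """Assign the label of the nearest class centroid (Euclidean distance)."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

print(classify((910.0, 0.05)))   # heavy, low-RCS object
print(classify((110.0, 0.33)))   # light, high-RCS object
```

Nearest-centroid is about the simplest possible classifier; it stands in here only to make concrete what "differentiating decoys from warheads" means as a computational task: mapping observed track features to a discrete label under time pressure.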

As the United States and its near-peer powers continue to invest in AI technologies, the international community faces a dual challenge: harnessing AI's benefits for defense and security while mitigating the risks of an AI arms race and ensuring compliance with international humanitarian law. The development of AI-driven military technologies underscores the importance of strategic stability and the need for frameworks that can accommodate the rapid pace of innovation in AI (Geist & Lohn, 2018; Hammer, 2024; Hruby & Miller, 2021; Johns, 2023; Johnson, 2022; Kroenig, 2021; Laird, 2020).

The future of autonomous weapons systems and AI integration in nuclear deterrence comes with risks and opportunities. While AI has the potential to significantly enhance defensive capabilities and strategic stability, it also introduces new challenges and ethical considerations. The key to navigating this complex landscape lies in rigorous testing, international cooperation, and the establishment of moral and legal standards for the use of AI in military applications. As technology evolves, so must the frameworks governing its use, ensuring that advances in military AI contribute to, rather than undermine, global security and peace (Hruby & Miller, 2021; Johnson, 2021, 2023; Scharre, 2024). The advancements in AI military technology by the United States, China, and Russia are shaping a new era of strategic competition with significant implications for global security dynamics. Integrating AI into military capabilities introduces complexities in terms of deterrence, arms control, and the potential for escalation, highlighting the need for international dialogue and cooperation to address the challenges posed by AI in military contexts (Johnson, 2023; Laird, 2020; Mekonnen, 2021; Puscas, 2023; Schmidt, 2022).

Opportunities 

Proponents of integrating AI into weapons systems and nuclear technologies argue that AI can provide significant strategic advantages, such as improved accuracy in target identification, enhanced missile defense systems, and better predictive capabilities for threat assessment. Advocates also suggest that AI can help maintain strategic stability by bolstering deterrence, as enhanced defense systems could make the prospect of offensive action less appealing to adversaries (Chesnut et al., 2023; Eckersley, 2018; Etzioni & Etzioni, 2017; Hruby & Miller, 2021; Konaev et al., 2020). Others argue that, given the advancements that geopolitical adversaries of the United States have made in the AI-military space, the US should consider integrating AI into its NC3 systems to ensure they can respond to incoming threats as quickly as those threats evolve, thereby adding to US strategic deterrence (Lowther & McGiffin, 2019; Reiner & Wehsener, 2019).

AI integration with combat communications technologies offers significant opportunities for greater battlefield situational awareness and strategic planning. Additionally, AI can enhance autonomous land, air, and sea vehicles, allowing them to be deployed on more hazardous, complex missions and thereby keeping human personnel out of harm's way. Furthermore, more advanced AI systems can enhance missile targeting and guidance systems, as well as improve reaction times for missile and gun defense systems. These advances will likely increase the survivability of military assets, whether personnel or hardware (Etzioni & Etzioni, 2017; Hunter et al., 2023; Lockheed Martin, 2023; Rashid et al., 2023).

AI's ability to keep pace with rapidly evolving threats is another opportunity presented by its integration into the military. As adversaries develop more sophisticated missile technologies, including hypersonic weapons, AI-driven systems could be crucial in detecting, tracking, and neutralizing these threats. The agility of AI algorithms in adapting to new patterns and the speed of decision-making could counterbalance advancements in offensive capabilities by adversaries, ensuring that defensive measures remain effective in the face of innovation (Hunter et al., 2023; Konaev et al., 2020; Rashid et al., 2023). 

In the strategic weapons and defense sector, AI-driven autonomous systems present significant opportunities to enhance the defensive capabilities of nuclear states. These systems could process vast amounts of data more quickly than human operators, identifying threats faster and more accurately. For example, AI-driven ballistic missile defense systems can track and intercept incoming missiles with previously unattainable precision, strengthening strategic stability and deterrence. AI-powered systems have the potential to enhance deterrence by improving strategic missile defense and NC3 systems, contributing to a security environment in which states feel less compelled to launch preemptive strikes out of fear of an imminent attack (Boulanin, 2018; Etzioni & Etzioni, 2017; Hunter et al., 2023; Johnson, 2022; Rashid et al., 2023; Takagi, 2024).

As mentioned earlier, integrating AI into satellites, radars, NC3, and other communication systems will significantly enhance them. AI-driven capabilities in these areas could lead to more robust cyber protections for communication networks, faster response times, earlier detection, and enhanced surveillance and monitoring. These systems could gain capabilities and functions that current levels of automation cannot achieve, or operate at speeds and levels of sophistication beyond the capabilities of human operators. AI-driven technologies thus have the potential to significantly enhance defense systems and intelligence capabilities while improving overall strategic deterrence (European Defence Agency, 2020; European Space Agency, 2023; Helfrich, 2022; Konaev et al., 2020; Rautenbach, 2022; Reiner & Wehsener, 2019; Scholten & Turner, 2019; Vincent, 2022). However, significant uncertainty remains over the risks these emerging technologies pose to global security and stability (Johns, 2023; Johnson, 2023; Puscas, 2023).

Risks

The dual-use nature of AI technologies means that their development can either escalate or mitigate global security threats. On the one hand, AI-enhanced systems offer unprecedented precision, efficiency, and damage limitation capabilities, potentially reducing the likelihood of civilian casualties in conflict zones (Etzioni & Etzioni, 2017; Konaev et al., 2020). On the other hand, the speed and autonomy of AI-driven systems could lead to escalating conflicts, where decisions made in milliseconds bypass human judgment and the opportunity for de-escalation (Boulanin, 2018; Depp & Scharre, 2024; Hruby & Miller, 2021; Johns, 2023; Johnson, 2022; Laird, 2020). 

Critics of AI integration raise ethical concerns, including the potential for autonomous weapons systems to make life-or-death decisions without human oversight. There are fears that AI could lower the threshold for nuclear weapon use, as AI-driven systems might misinterpret data or prove unable to navigate the complex decision-making required in nuclear deterrence. Additionally, an arms race in AI military technologies risks global instability (Geist & Lohn, 2018; Johnson, 2021, 2022, 2023; Kaur, 2024; Luca & Trager, 2024; Neely, 2024; Takagi, 2024). The complexity of AI algorithms and the possibility of unforeseen interactions in a real-world environment raise concerns about the predictability of these systems' behavior. The speed at which autonomous weapons operate could compress human decision-making times, increasing the risk of miscalculation and unintended escalation in nuclear contexts (Johns, 2023; Johnson, 2021, 2023; Laird, 2020; Neely, 2024; Rautenbach, 2022).

Additionally, these more advanced technologies are vulnerable to cyberattacks. In the digital age, concerns have arisen about the security of AI systems, particularly as AI-driven cyber warfare tools become increasingly prevalent. A notable fear is that adversaries could exploit vulnerabilities to cause malfunctions or take control of these weapons. Due to their complexity and heavy reliance on data, AI systems are natural targets for cyber warfare, and the potential for such breaches raises concerns about the security and stability of global nuclear arsenals. Unlike traditional cybersecurity threats, hacking AI systems involved in NC3 could lead to the unauthorized use or dissemination of sensitive operational information. The sophistication required to hack these systems suggests that only states or highly competent non-state actors, especially those developing powerful AI-driven cyber tools, could mount such attacks (Ceccarelli et al., 2023; Forden, 2020; Johnson, 2021; Puscas, 2023; Rautenbach, 2022).

The United States' transition from the Minuteman III ICBM to the Sentinel program highlights the changing cybersecurity landscape of nuclear weapons systems. The analog-based control systems of the Minuteman III provide a measure of security through obscurity, as they are far harder for hackers to access remotely than modern digital interfaces. The Sentinel ICBM, on the other hand, uses advanced digital technologies that introduce new vulnerabilities, especially if adversaries discover software flaws. These technologies could be integrated with AI for improved targeting and decision support. Robust cybersecurity measures are therefore needed to protect against unauthorized access and ensure the integrity of nuclear weapons systems. The challenge lies in balancing the operational advantages that modern technologies provide with the potential risks they introduce (Ceccarelli et al., 2023; CSIS, 2021; Rautenbach, 2022; Reiner & Wehsener, 2019).

Defense agencies must implement comprehensive cybersecurity programs to reduce cyberattack risks. These protocols should focus on encryption, intrusion detection systems, and rigorous testing against cyber threats. Furthermore, the development and deployment of AI systems in the nuclear domain should adhere to the principles of security by design, ensuring that these systems are inherently resilient against hacking attempts from the outset. Working with international cybersecurity experts and allies can enhance the security of these systems by sharing best practices and threat intelligence. Despite these measures, the dynamic nature of cyber threats requires constant vigilance and adaptation to counteract new hacking techniques and vulnerabilities that may emerge over time (Ceccarelli et al., 2023; Pradeesh, 2023; Rautenbach, 2022).
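One concrete building block behind the "security by design" principle above is authenticating command traffic so that tampering is detectable. The sketch below uses Python's standard-library HMAC support; it illustrates the generic cryptographic pattern only, not any actual defense protocol, and the key and message are invented.

```python
import hashlib
import hmac
import secrets

# A fresh random key; in practice key generation, storage, and rotation
# are themselves major security engineering problems.
key = secrets.token_bytes(32)

def sign(message: bytes, key: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag binding the message to the shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes) -> bool:
    """Constant-time comparison; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(message, key), tag)

msg = b"status: all systems nominal"
tag = sign(msg, key)
print(verify(msg, tag, key))                  # True: message is authentic
print(verify(b"status: tampered", tag, key))  # False: tag no longer matches
```

Message authentication addresses only integrity; the layered defenses the sources call for (encryption, intrusion detection, rigorous adversarial testing) tackle confidentiality and availability, and all of them must be designed in from the outset rather than bolted on.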

Managing Risks and Policy Recommendations

Integrating AI into weapons systems, notably nuclear weapons, necessitates a thoughtful approach that prioritizes transparency, responsibility, and accountability. Given AI's potential to transform deterrence, warfare, and the balance of power, states should adopt clear policies outlining norms and controls against the potential abuse of AI in the defense space to ensure international stability, especially in the nuclear order (Boulanin, 2018; Chesnut et al., 2023; Depp & Scharre, 2024; Gill, 2024; Johnson, 2021, 2022, 2023). These policies must emphasize human oversight in decision-making processes, particularly in situations that could lead to the use of nuclear weapons. Establishing international norms that promote transparency in the development and deployment of AI-enabled military technologies can foster trust among states. Such transparency would facilitate verification of compliance with international agreements, reducing suspicions and the risk of escalation due to misunderstandings (Mekonnen, 2021; Puscas, 2023; Verdiesen et al., 2021; Walker, 2021).

In reality, autonomous and increasingly AI-driven systems in the defense and intelligence sectors are firmly part of the strategic culture, and states will continue to invest in and develop these systems for the strategic advantages they provide (Lamberth & Scharre, 2022, 2023). For this reason, international norms and cooperation must focus on the responsible use of these emerging technologies. However, even the most effective agreement on the use of these advanced systems might not be able to mitigate the potential for these technologies to create imbalances in the international order (Eckersley, 2018; Hunter et al., 2023; Lamberth & Scharre, 2022, 2023).

Furthermore, integrating AI into nuclear weapons systems offers operational enhancements but also poses new security risks. With the deployment of the Sentinel ICBM, addressing the cybersecurity challenges presented by digital and AI technologies will be crucial. Ensuring the security of nuclear weapons systems in the era of AI and hacking requires a multifaceted approach encompassing technological solutions, international cooperation, and continuous innovation in cybersecurity practices (Neenan, 2024; Osborn, 2022; The Wall Street Journal, 2024).

Developing and implementing risk reduction measures is crucial to mitigate the risks associated with AI in nuclear contexts. These measures should include robust AI safety standards, regular testing and validation of AI systems under realistic conditions, and the establishment of fail-safes that ensure human control over nuclear decision-making processes. Furthermore, bilateral and multilateral dialogues dedicated to AI safety in military contexts can enhance understanding and cooperation among nuclear-armed states. Trust and confidence-building measures will be vital to ensuring global stability as these systems become more advanced. Dialogues could lead to developing shared safety protocols and emergency deconfliction communication lines to prevent AI-related misunderstandings from escalating into conflicts (Hruby & Miller, 2021; Johnson, 2021, 2023; Lamberth & Scharre, 2022, 2023; Rashid et al., 2023; Rautenbach, 2022; Reiner & Wehsener, 2019).

The evolving role of AI in the defense considerations of both nuclear and non-nuclear states underscores the importance of international norms and agreements in guiding the responsible use of AI in military contexts. Establishing clear guidelines for the application of AI in both nuclear and conventional military operations is essential to prevent an escalation of tensions and avoid the risk of unintended conflicts. Efforts to create a shared understanding among the international community regarding the ethical use of AI, transparency in AI military projects, and mechanisms for accountability can help mitigate the risks associated with integrating AI into the military. These norms can serve as a foundation for future agreements that address AI's unique challenges, ensuring its benefits are realized while minimizing its potential threats to global security (Horowitz & Scharre, 2021; Johnson, 2023; Konaev et al., 2020; Kroenig, 2021; Lamberth & Scharre, 2022, 2023; Puscas, 2023; Weinbaum, 2019).

International cooperation and dialogue play pivotal roles in managing the risks posed by potentially destabilizing technologies, such as nuclear weapons; this will remain true for AI (Johnson, 2021, 2022, 2023; Lamberth & Scharre, 2022, 2023). Establishing a dedicated international forum for AI, autonomous weapons, and nuclear security could facilitate the exchange of ideas, best practices, and concerns among states, experts, and international organizations. Such a forum would serve as a platform for negotiating agreements on the ethical use of AI in military contexts, including nuclear deterrence, and for discussing potential limitations on AI military applications that pose significant risks of escalation. By fostering a collaborative approach to AI governance, states can work together to ensure that AI advances global security rather than undermines it (Geist & Lohn, 2018; Hruby & Miller, 2021; Johnson, 2022, 2023; Lamberth & Scharre, 2022, 2023).

In addition to global initiatives, bilateral and multilateral agreements between key military powers could play a crucial role in shaping the future of AI-driven deterrence. These agreements could focus on AI-driven missile defense systems, cyber warfare, and space-based reconnaissance technologies. By directly addressing the concerns of involved parties, such agreements could pave the way for broader international norms and reduce the risks associated with AI in military contexts (Horowitz & Scharre, 2021; Johnson, 2023; Lamberth & Scharre, 2022, 2023; Puscas, 2023; Weinbaum, 2019).

Developing international norms and agreements for AI in military and nuclear contexts is an evolving process, reflecting the dynamic nature of technology and international relations. While challenges persist in reaching a global consensus, the ongoing efforts by international bodies, states, and non-state actors to establish guiding principles and frameworks represent critical steps toward responsible AI governance. The future of AI-driven deterrence will depend on the ability of the international community to collaborate, innovate, and navigate the complex interplay between technological advancement and global security imperatives (Gill, 2024; Kahn, 2024; Lamberth & Scharre, 2022, 2023; Johnson, 2023; Mekonnen, 2021; Puscas, 2023; Weinbaum, 2019).

Given the reliance on digital technologies and the potential integration of AI, enhancing the cybersecurity of nuclear command and control systems is paramount. Cyber vulnerabilities could be exploited to undermine the reliability and safety of these systems, increasing the risk of unauthorized or accidental use of nuclear weapons. States should invest in cybersecurity measures, including encryption, intrusion detection systems, and regular security audits, to protect against cyber threats. International collaboration on cybersecurity standards and practices can also strengthen the security of nuclear command and control systems against cyberattacks (Ceccarelli et al., 2023; Hruby & Miller, 2021; Konaev et al., 2020; Osborn, 2022; Puscas, 2023; Schmidt, 2022).

Policymakers face complex strategic choices as they seek to integrate AI into their defense strategies. The potential for AI to alter the strategic balance requires a careful assessment of the benefits and risks associated with its military use. States must navigate the dual nature of using AI for national security while avoiding actions that could provoke an arms race or escalate tensions. This balancing act involves investing in AI technologies and engaging in international diplomacy to establish norms and agreements that promote stability and prevent conflict. The strategic calculus must, therefore, include considerations of international cooperation and the pursuit of shared security objectives (Horowitz, 2018; Horowitz et al., 2018; Johnson, 2021, 2023; Konaev et al., 2020; Kroenig, 2021; Lamberth & Scharre, 2022, 2023; Schmidt, 2022).

Managing the escalation risks associated with AI in defense requires a combination of technological safeguards, diplomatic strategies, and international cooperation. States can navigate the complexities of integrating AI into their nuclear strategies by prioritizing transparency, developing risk reduction measures, enhancing cybersecurity, and promoting the use of AI for peaceful purposes. The international community must engage in ongoing dialogue and collaboration to address the challenges and harness the opportunities presented by AI, ensuring that technological advancements contribute to global peace and security (Hruby & Miller, 2021; Luca & Trager, 2024; Laird, 2020; Lamberth & Scharre, 2022, 2023; Neely, 2024).

Final Thoughts

The advent of AI is transforming the strategic landscape for both nuclear and non-nuclear states, altering calculations regarding deterrence, defense, and warfare. AI's capabilities, ranging from improved surveillance and reconnaissance to autonomous weapons systems, can shift the balance of power and change the dynamics of international relations (Horowitz, 2018; Hunter et al., 2023; Lamberth & Scharre, 2022, 2023; Mekonnen, 2021; Schmidt, 2022). This shift could have far-reaching implications for nuclear weapons and strategic defense systems as states reassess their security needs and deterrence strategies in light of AI-driven capabilities. For nuclear states in particular, AI can enhance the effectiveness and survivability of nuclear arsenals (Horowitz, 2018; Johnson, 2021, 2022, 2023).

The integration of AI into military systems is viewed as a means to strengthen strategic deterrence by enhancing decision-making processes, improving the accuracy of threat assessments, and optimizing the management of nuclear forces (Johnson, 2021, 2023). However, significant concerns have been raised about AI-driven data misinterpretation, escalation risks, and cyber vulnerabilities in these complex systems (Ceccarelli et al., 2023; Chesnut et al., 2023; Johnson, 2023; Laird, 2020; Puscas, 2023). The potential for AI to inadvertently lower the threshold for nuclear use, or for war of any kind, by making decision-makers overconfident in their capabilities or prone to misreading an adversary's actions is a concern that states must address (Boulanin, 2018; Horowitz & Scharre, 2021; Johns, 2023; Johnson, 2023; Konaev et al., 2020).

As adversaries develop new autonomous and AI-driven technologies in the defense and intelligence space, AI will increasingly play a crucial role in countering these threats. AI can aid in developing countermeasures and defense strategies by simulating various attack scenarios and optimizing response strategies. This capability is vital for maintaining a credible deterrence posture and ensuring national security in an era of rapid technological advancement (Chesnut et al., 2023; Depp & Scharre, 2024; Horowitz et al., 2018; Johnson, 2023; Laird, 2020; Reiner & Wehsener, 2019). However, integrating artificial intelligence into military and strategic domains presents a double-edged sword, offering unprecedented capabilities while posing significant challenges to global security and stability (Depp & Scharre, 2024; Hammes, 2023; Hruby & Miller, 2021; Rashid et al., 2023). 

AI can enhance the capability of missile defense systems to detect, differentiate, and intercept incoming missiles, including those carrying nuclear warheads. This advancement is crucial in defending against complex attacks and reducing the incentive for adversaries to pursue aggressive nuclear postures. However, the reliance on AI introduces risks related to system reliability, decision-making errors, and vulnerability to cyberattacks. The debate among experts is ongoing: some advocate the careful development and deployment of AI in missile defense, while others caution against potential pitfalls, especially the escalation risks associated with automated decision-making in nuclear contexts (Depp & Scharre, 2024; Hammes, 2023; Hruby & Miller, 2021; Hawley, 2017; Laird, 2020; Rashid et al., 2023; Vincent, 2022).

AI could enhance the precision and effectiveness of defense systems, reducing the likelihood of collateral damage and potentially deterring aggression through superior capabilities. However, the opacity of AI decision-making processes and the potential for unintended escalation present significant challenges to maintaining strategic stability (Boulanin, 2018; Eckersley, 2018; Hawley, 2017; Helfrich, 2022; Johnson, 2021, 2023).

International norms and agreements play a crucial role in navigating this new terrain. As AI capabilities evolve, the existing military conduct and arms control frameworks must adapt to remain relevant and practical (Lamberth & Scharre, 2022, 2023; Mekonnen, 2021). Innovative policy solutions and proactive international collaboration are necessary to shape a future in which AI enhances, rather than undermines, global security. As AI technologies continue to evolve, the international community must remain vigilant in assessing their impact on military strategies and international relations. 

Developing international norms and agreements will ensure that AI contributes to global stability rather than exacerbating tensions or sparking an arms race. By fostering dialogue and cooperation, states can harness the benefits of AI while mitigating its risks, navigating the AI-driven strategic landscape with prudence and foresight (Lamberth & Scharre, 2022, 2023; Luca & Trager, 2024). Addressing these complex dynamics requires technological solutions, such as developing robust AI safety and control mechanisms, as well as diplomatic efforts, including negotiating international treaties and establishing confidence-building measures (Lamberth & Scharre, 2022, 2023; Luca & Trager, 2024; Puscas, 2023).

Artificial intelligence and autonomous systems will continue to proliferate in nearly every aspect of our lives (Rainie & Anderson, 2023). As AI becomes more prominent in strategic technologies, it can create significant instabilities in the global order. AI-driven systems will enhance the capabilities of both major and minor powers, and will boost the capabilities of even non-state actors. For these reasons, the international community must work together to establish norms and agreements on the use of these systems, ensure the deconfliction of communication networks, and prepare for their inevitable misuse (Geist & Lohn, 2018; Johnson, 2023; Lamberth & Scharre, 2022, 2023; Puscas, 2023).

Sources:

  1. Allen, Gregory C. 2019. “Understanding China’s AI Strategy.” Center for a New American Security. https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy (April 5, 2024).

  2. Army Recognition. 2023. “C-RAM Centurion Phalanx Land-Based Air Defense Weapon System Data.” Army Recognition. https://armyrecognition.com/united_states_us_army_artillery_vehicles_system_uk/centurion_c-ram_land-based_weapon_system_phalanx_technical_data_sheet_specifications_pictures_video.html (April 5, 2024).

  3. Boulanin, Vincent. 2018. “AI & Global Governance: AI and Nuclear Weapons - Promise and Perils of AI for Nuclear Stability.” United Nations University. https://unu.edu/cpr/blog-post/ai-global-governance-ai-and-nuclear-weapons-promise-and-perils-ai-nuclear-stability (March 31, 2024).

  4. Ceccarelli, Michael, Rachael Harris, Badr Mahmoud, Diego Perez, and Shyla Sharma. 2023. Impact of Cyberattacks on Functionality of NC3 Systems and Strategic Deterrence. Carnegie Mellon University. Report. doi:10.1184/R1/21964826.v1.

  5. Chernenko, Elena, and Nikolai Markotkin. 2020. “Developing Artificial Intelligence in Russia: Objectives and Reality.” Carnegie Endowment for International Peace. https://carnegiemoscow.org/commentary/82422 (April 1, 2024).

  6. Chesnut, Mary, Tim Ditter, Anya Fink, Larry Lewis, and McDonnell. 2023. Artificial Intelligence in Nuclear Operations. Center for Naval Analyses. https://www.cna.org/reports/2023/04/ai-in-nuclear-operations (April 1, 2024).

  7. China State Council. 2017. “New Generation of Artificial Intelligence Development Plan.” https://flia.org/wp-content/uploads/2017/07/A-New-Generation-of-Artificial-Intelligence-Development-Plan-1.pdf.

  8. Connor, Roger. 2018. “The Predator, a Drone That Transformed Military Combat.” National Air and Space Museum. https://airandspace.si.edu/stories/editorial/predator-drone-transformed-military-combat (April 5, 2024).

  9. CSIS. 2021. “Minuteman III.” Missile Threat. https://missilethreat.csis.org/missile/minuteman-iii/ (November 21, 2023).

  10. CSIS Missile Defense Project. 2023. “Tomahawk.” Missile Threat, Center for Strategic and International Studies. https://missilethreat.csis.org/missile/tomahawk/ (April 5, 2024).

  11. Depp, Michael, and Paul Scharre. 2024. “Artificial Intelligence and Nuclear Stability.” War on the Rocks. https://warontherocks.com/2024/01/artificial-intelligence-and-nuclear-stability/ (March 31, 2024).

  12. DOTE. 2013. “Ballistic Missile Defense Systems - Terminal High-Altitude Area Defense (THAAD).” https://www.dote.osd.mil/Portals/97/pub/reports/FY2013/bmds/2013thaad.pdf?ver=2019-08-22-111313-127.

  13. Eckersley, Peter. 2018. “The Cautious Path to Strategic Advantage: How Militaries Should Plan for AI.” Electronic Frontier Foundation. https://www.eff.org/wp/cautious-path-strategic-advantage-how-militaries-should-plan-ai (April 1, 2024).

  14. Etzioni, Amitai, and Oren Etzioni. 2017. “Pros and Cons of Autonomous Weapons Systems.” Military Review.

  15. European Defence Agency. 2020. “Stronger Communication & Radar Systems with Help of AI.” https://eda.europa.eu/news-and-events/news/2020/08/31/stronger-communication-radar-systems-with-help-of-ai# (April 1, 2024).

  16. European Space Agency. 2023. “Artificial Intelligence in Space.” https://www.esa.int/Enabling_Support/Preparing_for_the_Future/Discovery_and_Preparation/Artificial_intelligence_in_space (April 1, 2024).

  17. Fedasiuk, Ryan. 2020. Chinese Perspectives on AI and Future Military Capabilities. Center for Security and Emerging Technology. doi:10.51593/20200022.

  18. Forden, Geoffrey. 2020. “The New Synergy Between Arms Control and Nuclear Command and Control.” Arms Control Today. https://www.armscontrol.org/act/2020-01/features/new-synergy-between-arms-control-nuclear-command-control (April 1, 2024).

  19. Geist, Edward, and Andrew Lohn. 2018. How Might Artificial Intelligence Affect the Risk of Nuclear War? RAND Corporation. doi:10.7249/PE296.

  20. Gill, Jaspreet. 2024. “DoD Hoping to Build International Cooperation on Responsible AI, Autonomy.” Breaking Defense. https://breakingdefense.sites.breakingmedia.com/2024/01/dod-hoping-to-build-international-cooperation-on-responsible-ai-autonomy/ (April 6, 2024).

  21. Hammer, Mathias. 2024. “AI Models Consistently Favor Using Nuclear Weapons in War Games.” Semafor. https://www.semafor.com/article/02/09/2024/ai-models-consistently-opt-for-nuclear-weapons-in-war-games (March 31, 2024).

  22. Hanson, Rickey L. 1987. “The Evolution of Artificial Intelligence and Expert Computer Systems in the Army.”

  23. Hawley, John K. 2017. Automation and the Patriot Air and Missile Defense System. Center for a New American Security.

  24. Helfrich, Emma. 2022. “AI-Based Space Technology to Utilize Satellite and Sensor Data.” Military Embedded Systems. https://militaryembedded.com/comms/communications/ai-based-space-technology-to-utilize-satellite-and-sensor-data (April 1, 2024).

  25. Hiebert, Kyle. 2024. “The United States Quietly Kick-Starts the Autonomous Weapons Era.” Centre for International Governance Innovation. https://www.cigionline.org/articles/the-united-states-quietly-kick-starts-the-autonomous-weapons-era/ (April 1, 2024).

  26. Horowitz, Michael C. 2018. “Artificial Intelligence, International Competition, and the Balance of Power.” Texas National Security Review. https://tnsr.org/2018/05/artificial-intelligence-international-competition-and-the-balance-of-power/ (March 31, 2024).

  27. Horowitz, Michael C, Gregory C Allen, Elsa B Kania, and Paul Scharre. 2018. “Strategic Competition in an Era of Artificial Intelligence.”

  28. Horowitz, Michael, and Paul Scharre. 2021. AI and International Stability: Risks and Confidence-Building Measures. Center for a New American Security.

  29. Hruby, Jill, and M. Nina Miller. 2021. “Assessing and Managing the Benefits and Risks of Artificial Intelligence in Nuclear-Weapon Systems.” The Nuclear Threat Initiative. https://www.nti.org/analysis/articles/assessing-and-managing-the-benefits-and-risks-of-artificial-intelligence-in-nuclear-weapon-systems/ (April 1, 2024).

  30. Hunter, Lance Y., Craig D. Albert, Christopher Henningan, and Josh Rutland. 2023. “The Military Application of Artificial Intelligence Technology in the United States, China, and Russia and the Implications for Global Security.” Defense & Security Analysis 39(2): 207–32. doi:10.1080/14751798.2023.2210367.

  31. Johns, Eliana. 2023. “AI May Not Launch a Nuke, but It May Convince You To.” Outrider. https://outrider.org/nuclear-weapons/articles/ai-may-not-launch-nuke-it-may-convince-you (March 31, 2024).

  32. Johnson, James. 2021. “Rethinking Nuclear Deterrence in the Age of Artificial Intelligence.” Modern War Institute. https://mwi.westpoint.edu/rethinking-nuclear-deterrence-in-the-age-of-artificial-intelligence/ (March 31, 2024).

  33. Johnson, James. 2022. “AI, Autonomy, and the Risk of Nuclear War.” War on the Rocks. https://warontherocks.com/2022/07/ai-autonomy-and-the-risk-of-nuclear-war/ (April 1, 2024).

  34. Johnson, James. 2023. AI and the Bomb: Nuclear Strategy and Risk in the Digital Age. Oxford University Press. doi:10.1093/oso/9780192858184.001.0001.

  35. Kahn, Lauren. 2024. “How the United States Can Set International Norms for Military Use of AI.” Lawfare. https://www.lawfaremedia.org/article/how-the-united-states-can-set-international-norms-for-military-use-of-ai (April 6, 2024).

  36. Kassel, Simon. 1971. Soviet Cybernetics Research: A Preliminary Study of Organizations and Personalities. RAND Corporation. https://www.rand.org/content/dam/rand/pubs/reports/2007/R909.pdf.

  37. Kaur, Silky. 2023. “One Nuclear-Armed Poseidon Torpedo Could Decimate a Coastal City. Russia Wants 30 of Them.” Bulletin of the Atomic Scientists. https://thebulletin.org/2023/06/one-nuclear-armed-poseidon-torpedo-could-decimate-a-coastal-city-russia-wants-30-of-them/ (April 1, 2024).

  38. Kaur, Silky. 2024. “Artificial Intelligence and the Evolving Landscape of Nuclear Strategy.” The Union of Concerned Scientists - The Equation. https://blog.ucsusa.org/science-blogger/artificial-intelligence-and-the-evolving-landscape-of-nuclear-strategy/ (March 31, 2024).

  39. Konaev, Margarita, Husanjot Chahal, Ryan Fedasiuk, Tina Huang, and Ilya Rahkovsky. 2020. U.S. Military Investments in Autonomy and AI: A Strategic Assessment. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/u-s-military-investments-in-autonomy-and-ai-a-strategic-assessment/ (April 1, 2024).

  40. Kroenig, Matthew. 2021. “Will Emerging Technology Cause Nuclear War?: Bringing Geopolitics Back In.” Strategic Studies Quarterly.

  41. Laird, Burgess. 2020. The Risks of Autonomous Weapons Systems for Crisis Stability and Conflict Escalation in Future U.S.-Russia Confrontations. https://www.rand.org/pubs/commentary/2020/06/the-risks-of-autonomous-weapons-systems-for-crisis.html (April 1, 2024).

  42. Lamberth, Megan, and Paul Scharre. 2022. Artificial Intelligence and Arms Control. https://www.cnas.org/publications/reports/artificial-intelligence-and-arms-control (March 31, 2024).

  43. Lamberth, Megan, and Paul Scharre. 2023. “Arms Control for Artificial Intelligence.” Texas National Security Review. https://tnsr.org/2023/05/arms-control-for-artificial-intelligence/ (March 31, 2024).

  44. Lee, Lizzi C. 2024. “Implications of China’s AI Strategy: State Engineering, Domestic Challenges, and Global Competition.” Asia Society. https://asiasociety.org/policy-institute/implications-chinas-ai-strategy-state-engineering-domestic-challenges-and-global-competition (April 5, 2024).

  45. Leese, Bryan. 2023. “The Cold War Computer Arms Race.” Journal of Advanced Military Studies 14(2): 102–20. doi:10.21140/mcuj.20231402006.

  46. Lockheed Martin. 2023. “Artificial Intelligence and Aegis: The Future Is Here.” Lockheed Martin. https://www.lockheedmartin.com/en-us/news/features/2023/artificial-intelligence-and-aegis-the-future-is-here.html (March 31, 2024).

  47. Lowther, Adam, and Curtis Mcgiffin. 2019. “America Needs a ‘Dead Hand.’” War on the Rocks. https://warontherocks.com/2019/08/america-needs-a-dead-hand/ (April 6, 2024).

  48. Luca, Laura M., and Robert F. Trager. 2024. “Killer Robots Are Here—and We Need to Regulate Them.” Foreign Policy. https://foreignpolicy.com/2022/05/11/killer-robots-lethal-autonomous-weapons-systems-ukraine-libya-regulation/ (April 1, 2024).

  49. McCormick, Ty. 2014. “Lethal Autonomy: A Short History – Foreign Policy.” Foreign Policy. https://foreignpolicy.com/2014/01/24/lethal-autonomy-a-short-history/ (March 31, 2024).

  50. Mekonnen, Daniel. 2021. “The Potential Use of Artificial Intelligence in a Nuclear Weapon Context and the Need to Advance a New Set of Norms.” In Nuclear Non-Proliferation in International Law - Volume VI: Nuclear Disarmament and Security at Risk – Legal Challenges in a Shifting Nuclear World, eds. Jonathan L. Black-Branch and Dieter Fleck. The Hague: T.M.C. Asser Press, 305–29. doi:10.1007/978-94-6265-463-1_12.

  51. Merwe, Joanna van der. 2021. “Iron Dome Shows AI’s Risks and Rewards.” Center for European Policy Analysis. https://cepa.org/article/iron-dome-shows-ais-risks-and-rewards/ (April 1, 2024).

  52. Military Embedded Systems. 2019. “Artificial Intelligence Timeline.” https://militaryembedded.com/ai/machine-learning/artificial-intelligence-timeline (March 31, 2024).

  53. Mizokami, Kyle. 2024. “The Navy’s Missile-Killing Gatling Gun Is a Weapon of Last Resort—And It Just Made Its First Score.” Popular Mechanics. https://www.popularmechanics.com/military/weapons/a46615851/navy-phalanx-cisw-weapon-of-last-resort/.

  54. Neely, Brian. 2024. “Council Post: Navigating The Risks Of AI Weaponization.” Forbes. https://www.forbes.com/sites/forbestechcouncil/2024/03/08/navigating-the-risks-of-ai-weaponization/ (April 1, 2024).

  55. Neenan, Alexandra G. 2024. “Defense Primer: LGM-35A Sentinel Intercontinental Ballistic Missile.”

  56. Nelson, Amy J., and Gerald L. Epstein. 2022. “The PLA’s Strategic Support Force and AI Innovation.” https://www.brookings.edu/articles/the-plas-strategic-support-force-and-ai-innovation-china-military-tech/ (April 5, 2024).

  57. Nocetti, Julien. 2020. The Outsider: Russia in the Race for Artificial Intelligence. French Institute of International Relations.

  58. NTI. 2022. “NPT.” The Nuclear Threat Initiative. https://www.nti.org/education-center/treaties-and-regimes/treaty-on-the-non-proliferation-of-nuclear-weapons/ (April 1, 2024).

  59. Osborn, Kris. 2022. “Pentagon ‘Cyber-Hardens’ New ICBM to Counter Enemy Hackers.” Warrior Maven: Center for Military Modernization. https://warriormaven.com/global-security/pentagon-cyber-hardens-new-icbm-to-counter-enemy-hackers (April 1, 2024).

  60. Paoli, Giacomo Persi, Kerstin Vignard, David Danks, and Paul Meyer. 2020. Modernizing Arms Control: Exploring Responses to the Use of AI in Military Decision-Making. UNIDIR.

  61. Pradeesh, Jai. 2023. “Council Post: Adversarial Attacks On AI Systems.” Forbes. https://www.forbes.com/sites/forbestechcouncil/2023/07/27/adversarial-attacks-on-ai-systems/ (April 1, 2024).

  62. Puscas, Ioana. 2023. “AI and International Security.” The United Nations Institute for Disarmament Research.

  63. Rainie, Lee, and Janna Anderson. 2023. The Future of Human Agency. Pew Research Center. https://www.pewresearch.org/internet/2023/02/24/the-future-of-human-agency/ (April 7, 2024).

  64. Rashid, Adib Bin, Ashfakul Karim Kausik, Ahamed Al Hassan Sunny, and Mehedy Hassan Bappy. 2023. “Artificial Intelligence in the Military: An Overview of the Capabilities, Applications, and Challenges” ed. Yu-an Tan. International Journal of Intelligent Systems 2023: 1–31. doi:10.1155/2023/8676366.

  65. Rautenbach, Peter. 2022. “Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration.” https://forum.effectivealtruism.org/posts/BGFk3fZF36i7kpwWM/artificial-intelligence-and-nuclear-command-control-and-1 (April 1, 2024).

  66. Reiner, Philip, and Alexa Wehsener. 2019. “The Real Value of Artificial Intelligence in Nuclear Command and Control.” War on the Rocks. https://warontherocks.com/2019/11/the-real-value-of-artificial-intelligence-in-nuclear-command-and-control/ (April 1, 2024).

  67. Scharre, Paul. 2024. “The Perilous Coming Age of AI Warfare.” Foreign Affairs. https://www.foreignaffairs.com/ukraine/perilous-coming-age-ai-warfare (April 1, 2024).

  68. Schmidt, Eric. 2022. “AI, Great Power Competition & National Security.” Daedalus 151(2): 288–98. doi:10.1162/daed_a_01916.

  69. Scholten, Ulrich, and Dawn M. Turner. 2019. “Machine Learning and Artificial Intelligence in Radar Technology.” SkyRadar. https://www.skyradar.com/blog/machine-learning-and-artifical-intelligence-in-radar-technology (April 1, 2024).

  70. Sędkowski, Wiktor. 2021. “Artificial Intelligence on the Battlefield.” Warsaw Institute. https://warsawinstitute.org/artificial-intelligence-battlefield/ (April 1, 2024).

  71. Shoker, Sarah, Andrew Reddie, and Leah Walker. 2024. New Tools Are Needed to Address the Risks Posed by AI-Military Integration. The Lawfare Institute. https://www.lawfaremedia.org/article/new-tools-are-needed-to-address-the-risks-posed-by-ai-military-integration (March 31, 2024).

  72. Siu, Thomas L. 2022. Autonomous Nuclear Torpedoes Usher in a Dangerous Future. U.S. Naval Institute. https://www.usni.org/magazines/proceedings/2022/may/autonomous-nuclear-torpedoes-usher-dangerous-future (April 1, 2024).

  73. Slayton, Rebecca. 2020. “The Promise and Risks of Artificial Intelligence: A Brief History.” War on the Rocks. http://warontherocks.com/2020/06/the-promise-and-risks-of-artificial-intelligence-a-brief-history/ (March 31, 2024).

  74. Stilwell, Blake. 2022. “Russia’s ‘Dead Hand’ Is a Soviet-Built Nuclear Doomsday Device.” Military.com. https://www.military.com/history/russias-dead-hand-soviet-built-nuclear-doomsday-device.html (April 5, 2024).

  75. Stoner, Robert H. 2009. “R2D2 with Attitude: The Story of the Phalanx Close-In Weapons.” NavWeaps. http://www.navweaps.com/index_tech/tech-103.php (April 5, 2024).

  76. Sutton, H. I. 2022. “Russia’s New ‘Poseidon’ Super-Weapon: What You Need To Know.” Naval News. https://www.navalnews.com/naval-news/2022/03/russias-new-poseidon-super-weapon-what-you-need-to-know/ (April 1, 2024).

  77. Takagi, Koichiro. 2024. “Artificial Intelligence and Future Warfare.” Hudson. https://www.hudson.org/defense-strategy/artificial-intelligence-future-warfare (March 31, 2024).

  78. The $130B Plan to Replace the U.S.’s Nuclear Missiles. 2024. https://www.youtube.com/watch?v=VTQ8yZSyrC0 (April 1, 2024).

  79. Thompson, Nicholas. 2009. “Inside the Apocalyptic Soviet Doomsday Machine.” Wired. https://www.wired.com/2009/09/mf-deadhand/ (April 5, 2024).

  80. USN. 2021. “MK 15 - Phalanx Close-In Weapon System (CIWS).” United States Navy. https://www.navy.mil/Resources/Fact-Files/Display-FactFiles/Article/2167831/mk-15-phalanx-close-in-weapon-system-ciws/ (April 5, 2024).

  81. Verdiesen, Ilse, Filippo Santoni De Sio, and Virginia Dignum. 2021. “Accountability and Control Over Autonomous Weapon Systems: A Framework for Comprehensive Human Oversight.” Minds and Machines 31(1): 137–63. doi:10.1007/s11023-020-09532-9.

  82. Vincent, Brandi. 2022. “Missile Defense Agency Taps AI and Machine Learning to Prepare for Next-Gen Threats.” DefenseScoop. https://defensescoop.com/2022/12/14/missile-defense-agency-taps-ai-and-machine-learning-to-prepare-for-next-gen-threats/ (March 31, 2024).

  83. Voronova, Victoria. 2022. “Modern Artificial Intelligence Technologies As An Echo of Soviet Cybernetic Science.” https://papers.ssrn.com/abstract=4250387 (March 31, 2024).

  84. Walker, Paddy. 2021. “Leadership Challenges from the Deployment of Lethal Autonomous Weapon Systems: How Erosion of Human Supervision Over Lethal Engagement Will Impact How Commanders Exercise Leadership.” The RUSI Journal 166(1): 10–21. doi:10.1080/03071847.2021.1915702.

  85. Weinbaum, Cortney. 2019. A Code of Conduct for AI in Defense Should Be an Extension of Other Military Codes. https://www.rand.org/pubs/commentary/2019/09/a-code-of-conduct-for-ai-in-defense-should-be-an-extension.html (April 6, 2024).