Chad Lawhorn | December 2024 (Updated June 2025)
U.S.–China Strategic Competition in AI and Emerging Technologies: Navigating Defense Innovation and Stability
Introduction and Strategic Framing
Artificial intelligence (AI), a rapidly evolving field, is increasingly viewed as a central driver of strategic competition between the United States and China, particularly in defense, intelligence, and dual-use technology sectors. This competition is not confined to the battlefield; it extends to the shaping of global standards, economic influence, and technological sovereignty. From an external policy research perspective, the integration of AI into military platforms, surveillance networks, and cyber operations is already beginning to reshape conventional power dynamics and deterrence strategy.
Both nations are accelerating AI adoption in critical systems, raising concerns about miscalculation, misinterpretation, and potential escalation. While the transformative potential of AI in national security and economic development is widely recognized, it also presents serious risks, chief among them the destabilizing effects of an unchecked AI arms race and the erosion of human decision-making in conflict scenarios.
This memo examines the evolving U.S.–China technological rivalry through the lens of strategic stability and defense innovation. It asks a critical policy question: How can the United States maintain its competitive advantage in AI and emerging technologies while safeguarding strategic stability and reinforcing responsible norms?
Historical and Doctrinal Context
Both nations have framed AI as a critical enabler of strategic superiority. The U.S. Department of Defense's AI Strategy and National Defense Strategy call for accelerating AI integration across command and control, intelligence analysis, logistics, and autonomous systems. China's New Generation AI Development Plan (2017) embeds AI development in national rejuvenation goals, aiming to achieve global leadership by 2030 (Webster et al. 2017). In many ways, the doctrinal posturing around AI mirrors Cold War-era strategic competition. Just as nuclear weapons reshaped deterrence theory and international diplomacy, AI has the potential to upend traditional escalation ladders, compress decision timelines, and challenge notions of human control in conflict (Horowitz and Scharre 2021).
Comparative Capability Assessment
The United States maintains an edge in AI research, high-performance computing, and the deployment of combat-proven autonomous systems. Its defense-industrial base and alliances further augment its capabilities. However, China has closed significant gaps through rapid scale-up in surveillance infrastructure, facial recognition AI, and integrated civil-military development (Allen 2019). In domains such as AI-enabled NC3 (nuclear command, control, and communications), open-source analysis suggests that the U.S. emphasizes layered safeguards and human-in-the-loop decision models, while China is investing in speed and automation (Horowitz and Scharre 2021; Insikt Group 2025).
In the cyber domain, both countries leverage AI for offensive and defensive operations, including anomaly detection, automated threat hunting, and synthetic deception. However, China's domestic data abundance and looser ethical constraints offer unique advantages in surveillance-driven AI (Chan et al. 2025). Conversely, the U.S. maintains leadership in AI chips, system integration, and software frameworks (Atlantic Council 2024; U.S.–China Economic and Security Review Commission 2019). As both countries scale AI-enabled military applications, the strategic balance is increasingly determined not only by technological sophistication but also by the speed of integration, doctrinal flexibility, and institutional trust in autonomous systems.
Risks and Opportunities
AI offers the potential for improved early warning systems, real-time threat assessment, and non-kinetic deterrence. Predictive analytics could aid in crisis de-escalation and enhance the credibility of second-strike capabilities. However, the same technologies increase the risk of inadvertent escalation. AI-enabled missile defense or autonomous early warning could misinterpret benign activity or spoofed data as hostile intent, triggering rapid retaliation (Scharre 2018). Cyber vulnerabilities and data poisoning attacks could further undermine confidence in automated systems (Zhang and Murdick 2023). Overreliance on AI could also lower the threshold for conflict and foster a false sense of strategic invincibility, particularly if adversaries doubt how trustworthy or cautious each other's AI systems are in making decisions (Horowitz 2018).
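To make the data-poisoning risk concrete, the following minimal sketch (synthetic data and illustrative parameters, not drawn from any cited source) measures how flipping a small fraction of training labels can degrade a simple classifier of the kind that might sit behind an automated warning pipeline:

```python
# Hypothetical illustration: how a small fraction of poisoned training
# labels can degrade a threat classifier. Synthetic data only; the model
# choice and poisoning rates are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(poison_rate: float) -> float:
    """Flip a fraction of training labels, retrain, and report test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), int(poison_rate * len(y_poisoned)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # adversarial label flips
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for rate in (0.0, 0.05, 0.15, 0.30):
    print(f"poison rate {rate:.0%}: test accuracy {accuracy_with_poisoning(rate):.3f}")
```

Because such degradation can be gradual and hard to detect operationally, confidence in automated systems depends on data provenance and integrity checks as much as on model quality.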
The integration of AI into autonomous weapons raises additional challenges. Without robust oversight, AI systems may operate beyond intended parameters, creating legal and ethical dilemmas. While AI-enhanced ISR (intelligence, surveillance, and reconnaissance) platforms can reduce ambiguity and improve situational awareness, their effectiveness depends on the quality of training data, their resilience to spoofing, and the role of human judgment in interpretation (Allen 2020). This makes human-in-the-loop protocols not merely technical but strategic requirements for stability.
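As one hedged illustration of what a human-in-the-loop protocol means in practice, the sketch below gates any machine recommendation behind both a confidence threshold and explicit operator approval; all class, field, and function names are hypothetical:

```python
# Minimal sketch of a human-in-the-loop engagement gate. All names are
# hypothetical; a real system would add authentication, logging, and
# fail-safe defaults far beyond this illustration.
from dataclasses import dataclass

@dataclass
class Recommendation:
    target_id: str
    confidence: float  # model confidence in a hostile classification
    rationale: str     # human-readable evidence summary

def human_in_the_loop_gate(rec: Recommendation, threshold: float = 0.9) -> bool:
    """Return True only if the model is confident AND a human approves."""
    if rec.confidence < threshold:
        return False  # fail safe: low confidence never escalates
    print(f"Target {rec.target_id}: confidence {rec.confidence:.2f}")
    print(f"Rationale: {rec.rationale}")
    answer = input("Operator approval required (yes/no): ")
    return answer.strip().lower() == "yes"

if __name__ == "__main__":
    rec = Recommendation("T-042", 0.95, "Radar track consistent with hostile profile")
    print("Engage" if human_in_the_loop_gate(rec) else "Hold")
```

The design choice worth noting is the fail-safe default: ambiguity resolves to inaction, and no confidence score alone can authorize escalation.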
Norms and Governance Options
Research shows that existing frameworks, such as the Wassenaar Arrangement and the UN Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems (LAWS), offer limited traction in regulating the military applications of AI. These regimes lack enforceability, specificity, and buy-in from all key players, especially China and Russia (Boulanin and Verbruggen 2017). Officials might consider a more tailored governance architecture, one that strikes a balance between national security imperatives and international stability.
This governance structure could take both hard law and soft law approaches. Export controls on critical AI components and dual-use algorithms could slow the proliferation of military AI and improve accountability. Simultaneously, voluntary norms focused on human control, transparency, and escalation safeguards can foster responsible behavior even in the absence of formal treaties.
To promote long-term stability, analysts have proposed that the United States prioritize the following:
Bilateral transparency measures and pre-notification protocols for AI weapons testing and cyber operations.
Multilateral dialogues to establish "human-in-the-loop" and "human-on-the-loop" requirements for lethal systems.
Joint development of cybersecurity and fail-safe protocols for AI-driven command-and-control infrastructure.
Monitoring mechanisms adapted from arms control practices, such as algorithm verification and trusted third-party audits, to enhance trust in AI-enabled platforms (a brief illustration of algorithm verification follows this list).
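The algorithm-verification idea in the last item could, in one simplified form, resemble the following sketch, in which a third-party auditor confirms that a deployed model artifact matches the hash recorded in a declared registry (file names and the registry format are assumptions for illustration):

```python
# Hypothetical sketch of "algorithm verification": an auditor checks that
# a deployed model artifact matches the hash recorded in a declared
# registry. Paths and the registry schema are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def artifact_digest(path: Path) -> str:
    """SHA-256 digest of a model artifact, streamed to handle large files."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_deployment(artifact: Path, registry_file: Path) -> bool:
    """Compare the deployed artifact against its declared registry entry."""
    registry = json.loads(registry_file.read_text())
    declared = registry.get(artifact.name)
    return declared is not None and declared == artifact_digest(artifact)

# Usage (illustrative): verify_deployment(Path("model.bin"), Path("registry.json"))
```

A verification regime of this kind confirms that the system fielded is the system declared, without requiring either side to expose model internals.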
Encouraging compliance with emerging AI governance norms will likely depend on well-designed incentives. One example to consider is the Missile Technology Control Regime (MTCR), which fosters restraint by controlling exports. Although AI poses different challenges, similar ideas could be adapted by tying access to essential data, advanced chips, or simulation tools to commitments to transparency and safety standards.
Given the rapid pace of AI development, flexibility will be essential. Traditional arms control models may struggle to keep up, making adaptive governance a critical concept. International bodies, potentially modeled in part after institutions like the IAEA or ICAO, might one day support stability by maintaining up-to-date registries of military-relevant AI uses, promoting testing protocols, and coordinating responses to norm violations.
The private sector will also play an increasingly central role. Many of the most advanced AI applications with defense implications originate in commercial settings, not government arsenals. Ensuring alignment between private innovation and national security goals may require a mix of incentives and voluntary frameworks. Public-private collaboration, through consortia focused on AI safety, red teaming, and ethical certification, could help build trust and responsible innovation across sectors.
Policy Advocacy and Recommendations
The United States should lead by example in crafting a responsible and forward-looking AI defense strategy. This begins with embedding AI governance into national security planning, not as an afterthought but as a core strategic pillar.
International Actions:
Convene senior-level dialogues with China focused on AI and strategic stability, building on Track 1.5 and Track 2 dialogues to reduce misperceptions and prevent inadvertent escalation.
Promote AI safety and governance norms through NATO, the Quad, the Global Partnership on AI (GPAI), and bilateral channels. Shared values on autonomy and human oversight must anchor these efforts.
Support international AI red-team exchanges and testing frameworks, potentially modeled on arms verification regimes. These mechanisms can build trust, detect failure modes, and promote transparency.
Align AI strategies with alliance interoperability goals, ensuring that future AI-enabled systems can operate securely, ethically, and efficiently across partner militaries.
Domestic Actions:
Establish a federal AI oversight office with dedicated authority over safety testing, red teaming, and standards development for all dual-use defense systems.
Establish joint research centers focused on scalable testing, auditability, and simulation environments for battlefield AI systems.
Expand public-private partnerships that bring together startups, defense primes, and academia to accelerate innovation while safeguarding ethical standards.
Mandate algorithmic transparency and audit trails for AI systems used in warfighting, surveillance, and command structures, especially those involving autonomous decision loops (a tamper-evident audit-trail sketch follows this list).
Invest in adaptive military doctrine development, ensuring that strategic concepts evolve in tandem with the emergence of new AI capabilities and challenges.
Launch a comprehensive AI workforce initiative, integrating technical training, ethics, and operational fluency into the professional development of military, intelligence, and acquisition personnel.
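The audit-trail mandate above could build on well-understood techniques. The sketch below, with an assumed record schema, hash-chains each decision entry to its predecessor so that any retroactive edit breaks the chain and becomes detectable:

```python
# Illustrative sketch of a tamper-evident audit trail for AI decision
# records: each entry stores a hash of the previous entry, so altering
# history invalidates the chain. The record schema is an assumption.
import hashlib
import json
import time

def _entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class AuditTrail:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, system: str, decision: str, operator: str) -> None:
        entry = {
            "timestamp": time.time(),
            "system": system,
            "decision": decision,
            "operator": operator,
            "prev_hash": _entry_hash(self.entries[-1]) if self.entries else "GENESIS",
        }
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any altered entry invalidates all later links."""
        for i in range(1, len(self.entries)):
            if self.entries[i]["prev_hash"] != _entry_hash(self.entries[i - 1]):
                return False
        return True

trail = AuditTrail()
trail.record("isr-classifier-v2", "flagged track T-042", "op-117")
trail.record("isr-classifier-v2", "escalated to human review", "op-117")
print("chain intact:", trail.verify())
```

Verification of such a chain could be delegated to an oversight body without exposing model internals, consistent with the transparency goals above.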
Cost Considerations and Resource Alignment
Implementing oversight, testing, and interoperability frameworks will likely require sustained funding and coordinated effort across agencies. Potential costs include establishing new AI research centers involving the Department of Defense and NIST, developing interoperable data standards, and supporting public-private red-teaming initiatives. While these investments are substantial, they are likely modest compared with the potential consequences of strategic miscalculation, system failure, or an unregulated AI arms race. Aligning resources with defense innovation priorities, including coordination through the NDAA process, would help promote long-term sustainability.
The recommendations outlined in this memo are intended to support the preservation of the U.S. strategic edge while contributing to a more stable and predictable global AI landscape. Embedding principles such as ethics, resilience, and interoperability into AI integration could allow the United States to demonstrate leadership not only in technological capability but also in responsible governance.
Final Thoughts
The future of AI in defense presents not only a technological competition but also a significant governance challenge. As U.S.–China strategic rivalry deepens, there is a growing need to reconcile the pursuit of national advantage with the imperative of global stability. From a policy research standpoint, the United States appears well-positioned to shape an approach that blends innovation with foresight, one that supports commercial development, protects national security interests, and promotes norms aimed at reducing the risk of AI-driven conflict. Striking a balance between ambition and accountability may prove essential in managing this evolving frontier in international security, and clear challenges remain.
To help address these challenges, the United States could explore the development of a robust AI oversight framework grounded in technical expertise, interagency coordination, and international engagement. Possible steps include expanding public-private partnerships for AI risk assessment, promoting export control regimes tailored to military-relevant algorithms, and strengthening information-sharing mechanisms with allied countries to better detect and respond to AI vulnerabilities. Greater investment in Track 1.5 and Track 2 dialogues with Chinese academic and policy communities may also help build mutual understanding and reduce misperceptions over time.
Policymakers might also consider how best to manage the accelerating convergence of civilian and military AI. One approach could involve the creation of adaptable regulatory mechanisms that keep pace with innovation while ensuring appropriate oversight. This could include the establishment of a dedicated federal body tasked with overseeing safety testing, red teaming, and standards development for dual-use systems. In parallel, closer collaboration between the defense sector, academia, and industry may support the ethical and technically sound development of AI-enabled capabilities.
Measuring Policy Impact and Strategic Effectiveness
Clear metrics for evaluating AI governance and defense integration are necessary to ensure accountability and policy impact. Success indicators might include:
The establishment and operationalization of a federal AI oversight body.
The number and frequency of bilateral and trilateral confidence-building engagements focused on AI stability.
The adoption of algorithmic audit trails and human-in-the-loop standards across DoD programs.
Increased interoperability and norm convergence among allies and partners.
Demonstrated resilience of AI-enabled systems during red-teaming or stress-testing exercises.
Annual reporting mechanisms, either through the Government Accountability Office (GAO) or a dedicated congressional AI oversight committee, could further institutionalize transparency, track progress, and provide corrective feedback loops over time.
While implementation feasibility will depend on insights from within government, external research suggests that global AI governance will ultimately hinge on American leadership in shaping inclusive and transparent mechanisms that reflect democratic values. Although near-term consensus with China may be limited, the United States can build coalitions with allies and like-minded partners to set baseline norms on autonomy, targeting, and escalation control that shape the future trajectory of military AI integration. Proactive governance, rather than reactive crisis management, will be the cornerstone of strategic stability in the AI era.
Sources:
Allen, Gregory C. 2019. "Understanding China's AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security." Washington, DC: Center for a New American Security.
Allen, Gregory C. 2020. Understanding AI Technology. Washington, DC: Joint Artificial Intelligence Center and Center for Security and Emerging Technology.
Atlantic Council. 2024. Commission on Defense Innovation Adoption Final Report. Washington, DC: Atlantic Council.
Boulanin, Vincent, and Maaike Verbruggen. 2017. Mapping the Development of Autonomy in Weapon Systems. Solna, Sweden: Stockholm International Peace Research Institute.
Chan, Kyle, Gregory Smith, Jimmy Goodrich, Gerard DiPippo, and Konstantin F. Pilz. 2025. Full Stack: China's Evolving Industrial Policy for AI. RAND Corporation Perspectives PE-A4012-1. Santa Monica, CA: RAND Corporation.
Horowitz, Michael C. 2018. "The Promise and Peril of Military Applications of Artificial Intelligence." Bulletin of the Atomic Scientists, April 23, 2018.
Horowitz, Michael C., and Paul Scharre. 2021. AI and International Stability: Risks and Confidence-Building Measures. Washington, DC: Center for a New American Security.
Insikt Group. 2025. Measuring the US-China AI Gap. Recorded Future Insikt Group research report, May 8. Arlington, VA: Recorded Future.
Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton & Company.
U.S.–China Economic and Security Review Commission. 2019. “Section 2: U.S.–China Competition in Emerging Technologies.” In 2019 Annual Report to Congress, 407–442. Washington, DC: U.S.–China Economic and Security Review Commission.
Webster, Graham, Rogier Creemers, Elsa Kania, and Paul Triolo. 2017. "Full Translation: China's 'New Generation Artificial Intelligence Development Plan' (2017)." Stanford DigiChina. August 1, 2017. https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/.
Zhang, Helen Toner, and Dewey Murdick. 2023. Defense or Diffusion? Open-Source AI in the U.S.–China Competition. Washington, DC: Center for Security and Emerging Technology.