Chad Lawhorn | December 2024 (Updated June 2025)
U.S.–China Strategic Competition in AI and Emerging Technologies: Navigating Defense Innovation and Stability
Introduction and Strategic Framing
Artificial intelligence (AI) is rapidly becoming the centerpiece of strategic competition between the United States and China, especially in the defense and intelligence sectors. This contest is not limited to military applications; it also shapes global standards, economic power, and technological sovereignty. The integration of AI into military platforms, surveillance networks, and cyber operations is redefining conventional power dynamics. As both nations accelerate AI adoption in critical systems, the risk of miscalculation or destabilizing escalation grows. Technological innovation offers transformative potential for both national security and economic growth, but it also introduces complex risks, chief among them rapid escalation, misinterpretation, and a destabilizing arms race in dual-use technologies. The policy challenge is clear: How can the U.S. sustain its competitive edge in AI and emerging technologies while promoting stability and avoiding the breakdown of existing deterrence frameworks?
Historical and Doctrinal Context
Both nations have framed AI as a critical enabler of strategic superiority. The U.S. Department of Defense's AI Strategy and National Defense Strategy call for accelerating AI integration across command and control, intelligence analysis, logistics, and autonomous systems. China's New Generation AI Development Plan (2017) embeds AI development in national rejuvenation goals, aiming to achieve global leadership by 2030 (Webster et al. 2017). In many ways, the doctrinal posturing around AI mirrors Cold War-era strategic competition. Just as nuclear weapons reshaped deterrence theory and international diplomacy, AI has the potential to upend traditional escalation ladders, compress decision timelines, and challenge notions of human control in conflict (Horowitz and Scharre 2021).
Comparative Capability Assessment
The United States maintains an edge in AI research, high-performance computing, and the deployment of combat-proven autonomous systems. Its defense-industrial base and alliances (e.g., AUKUS, NATO) further augment its capabilities. However, China has closed significant gaps through rapid scale-up in surveillance infrastructure, facial recognition AI, and integrated civil-military development (Allen 2019). In domains such as AI-enabled NC3 (nuclear command, control, and communications), open-source analysis suggests that the U.S. emphasizes layered safeguards and human-in-the-loop decision models, while China is investing in speed and automation (Horowitz and Scharre 2021; Insikt Group 2025).
In the cyber domain, both countries leverage AI for offensive and defensive operations, including anomaly detection, automated threat hunting, and synthetic deception. However, China's domestic data abundance and looser ethical constraints offer unique advantages in surveillance-driven AI (Chan et al. 2025). Conversely, the U.S. maintains leadership in AI chips, system integration, and software frameworks (Atlantic Council 2024; U.S.–China Economic and Security Review Commission 2019, 407–442). As both countries scale AI-enabled military applications, the strategic balance is increasingly determined not only by technological sophistication but also by the speed of integration, doctrinal flexibility, and institutional trust in autonomous systems.
Risks and Opportunities
AI offers the potential for improved early warning systems, real-time threat assessment, and non-kinetic deterrence. Predictive analytics could aid in crisis de-escalation and enhance the credibility of second-strike capabilities. However, the same technologies increase the risk of inadvertent escalation. AI-enabled missile defense or autonomous early warning could misinterpret benign activity or spoofed data as hostile intent, triggering rapid retaliation (Scharre 2018). Cyber vulnerabilities and data poisoning attacks could further undermine confidence in automated systems (Zhang and Murdick 2023). Moreover, overreliance on AI could lower the threshold for conflict initiation or foster a false sense of strategic invincibility, particularly if adversaries doubt the credibility or restraint of AI-driven decision architectures (Horowitz 2018).
The integration of AI into autonomous weapons raises additional challenges. Without robust oversight mechanisms, AI systems may operate beyond intended parameters, creating legal and ethical dilemmas. While AI-enhanced ISR (intelligence, surveillance, and reconnaissance) platforms can reduce ambiguity and improve situational awareness, their effectiveness hinges on the quality of training data, their resilience to spoofing, and the role of human judgment in interpretation (Allen 2020). This makes human-in-the-loop protocols not merely technical safeguards but strategic requirements for stability.
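To make this concrete, the sketch below shows one way a human-in-the-loop gate might be expressed in software. It is purely illustrative: the TrackAssessment structure, the confidence score, and the default-deny rule are assumptions made for this example, not features of any fielded system.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    HOLD = "hold"
    ENGAGE = "engage"

@dataclass
class TrackAssessment:
    track_id: str
    hostile_confidence: float  # model output in [0.0, 1.0]

def human_in_the_loop_gate(assessment: TrackAssessment,
                           urgent_threshold: float = 0.95) -> Decision:
    """Route every lethal action through a human operator.

    In this sketch, no confidence level authorizes autonomous engagement;
    a high-confidence assessment only raises the case's priority in the
    operator's queue.
    """
    priority = ("urgent" if assessment.hostile_confidence >= urgent_threshold
                else "routine")
    # The system may only recommend; a human issues the final decision.
    print(f"[{priority}] Track {assessment.track_id}: hostile confidence "
          f"{assessment.hostile_confidence:.2f}. Awaiting operator authorization.")
    return Decision.HOLD  # default-deny until a human approves

if __name__ == "__main__":
    human_in_the_loop_gate(TrackAssessment("T-042", 0.97))
```

The design choice worth noting is the default-deny posture: higher machine confidence changes priority, never authority.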
Norms and Governance Options
Existing frameworks, such as the Wassenaar Arrangement and the UN Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems (LAWS), offer limited traction in regulating the military applications of AI. These regimes lack enforceability, specificity, and buy-in from all key players, especially China and Russia (Boulanin and Verbruggen 2017). A more tailored governance architecture is urgently needed, one that strikes a balance between national security imperatives and international stability.
This governance should combine hard law and soft law approaches. Binding mechanisms, such as export controls on critical AI components and dual-use algorithms, can slow the proliferation of sensitive AI capabilities and improve accountability. Simultaneously, voluntary norms focused on human control, transparency, and escalation safeguards can foster responsible behavior even in the absence of formal treaties.
To move forward, the U.S. should prioritize:
Bilateral transparency measures and pre-notification protocols for AI weapons testing and cyber operations.
Multilateral dialogues to establish "human-in-the-loop" and "human-on-the-loop" requirements for lethal systems.
Joint development of cybersecurity and fail-safe protocols for AI-driven command-and-control infrastructure.
Monitoring mechanisms adapted from arms control practices, such as algorithm verification and trusted third-party audits, to enhance trust in AI-enabled platforms.
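One concrete building block for the monitoring and audit mechanisms above is cryptographic fingerprinting of model artifacts. The sketch below assumes, for simplicity, that the audited model is a single serialized weights file; under that assumption, a trusted third party could attest that a deployed model is byte-identical to the version it reviewed, without the owner disclosing the weights themselves.

```python
import hashlib
import json
from pathlib import Path

def model_fingerprint(weights_path: Path) -> str:
    """Compute a SHA-256 digest of a serialized model's weights."""
    digest = hashlib.sha256()
    with open(weights_path, "rb") as f:
        # Stream in 1 MiB chunks so large models do not exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def attestation_record(weights_path: Path, system_name: str) -> str:
    """Produce an entry an auditor could sign and file in a registry."""
    return json.dumps(
        {"system": system_name, "sha256": model_fingerprint(weights_path)},
        sort_keys=True,
    )
```

Verifying what a model does, as opposed to what it is, remains a far harder problem; fingerprinting only anchors an audit to a specific artifact.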
Incentivizing compliance is key. One model is the Missile Technology Control Regime (MTCR), which conditions export access on restraint. Similar principles could apply to AI, tying access to sensitive datasets, chips, and simulation tools to adherence to transparency and safety norms.
Adaptive governance must also be a central principle. Given the pace of AI development, rigid frameworks will quickly become outdated. International bodies, modeled after the IAEA or ICAO, could maintain living registries of military-relevant AI uses, oversee testing protocols, and coordinate responses to violations.
Finally, the private sector plays a critical role. Many cutting-edge AI defense applications originate in commercial labs, not government arsenals. Ensuring alignment between industry and national security interests requires both incentives and regulation. Establishing ethical guardrails for dual-use technologies will require collaboration across civil-military boundaries, potentially through public-private consortia focused on AI safety, red teaming, and certification.
Policy Advocacy and Recommendations
The United States should lead by example in crafting a responsible and forward-looking AI defense strategy. This begins with embedding AI governance into national security planning, not as an afterthought but as a core strategic pillar.
International Actions:
Convene senior-level dialogues with China focused on AI and strategic stability, building on Track 1.5 and 2 dialogues to reduce misperceptions and prevent inadvertent escalation.
Promote AI safety and governance norms through NATO, the Quad, the Global Partnership on AI (GPAI), and bilateral channels. Shared values on autonomy and human oversight must anchor these efforts.
Support international AI red-team exchanges and testing frameworks, potentially modeled on arms verification regimes. These mechanisms can build trust, detect failure modes, and promote transparency.
Align AI strategies with alliance interoperability goals, ensuring that future AI-enabled systems can operate securely, ethically, and efficiently across partner militaries. Norm convergence should be a strategic objective.
Domestic Actions:
Establish a federal AI oversight office with dedicated authority over safety testing, red teaming, and standards development for all dual-use defense systems.
Establish joint DoD–NIST AI research centers focused on scalable testing, auditability, and simulation environments for battlefield AI systems.
Expand public-private partnerships that bring together startups, defense primes, and academia to accelerate innovation while safeguarding ethical standards.
Mandate algorithmic transparency and audit trails for AI systems used in warfighting, surveillance, and command structures, especially those involving autonomous decision loops (a minimal sketch of such an audit trail follows this list).
Invest in adaptive military doctrine development, ensuring that strategic concepts evolve in tandem with the emergence of new AI capabilities and challenges.
Launch a comprehensive AI workforce initiative, integrating technical training, ethics, and operational fluency into the professional development of military, intelligence, and acquisition personnel.
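To gesture at what the mandated audit trails above might involve, the following sketch implements a hash-chained, append-only decision log: each record commits to its predecessor's hash, so any after-the-fact alteration of a record breaks verification. The field names and the choice of SHA-256 are illustrative assumptions, not a reference to any existing DoD standard.

```python
import hashlib
import json
import time

class DecisionAuditLog:
    """Append-only audit trail; each entry chains to the previous entry's
    hash, so post-hoc tampering with any record is detectable."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, system: str, inputs: dict, output: str,
               human_operator: str | None) -> dict:
        entry = {
            "timestamp": time.time(),
            "system": system,
            "inputs": inputs,
            "output": output,
            "human_operator": human_operator,  # None flags an autonomous step
            "prev_hash": self._last_hash,
        }
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(serialized).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain and confirm no entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev_hash"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

A production system would add digital signatures, secure storage, and classification handling; the chaining shown here is only the minimal tamper-evidence primitive.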
Cost Considerations and Resource Alignment
Implementing the proposed oversight, testing, and interoperability frameworks will require sustained funding and cross-agency coordination. Initial costs may include establishing new DoD–NIST AI research centers, developing interoperable data standards, and funding public-private red-teaming efforts. However, these investments are modest compared to the potential cost of strategic miscalculation, system failure, or an unchecked AI arms race. Aligning budgets with national defense innovation priorities, and securing congressional authorization through the NDAA process, will be essential to sustaining these resources.
These recommendations aim to preserve the U.S. strategic advantage while promoting a safer and more predictable global AI environment. By embedding ethics, resilience, and interoperability into the DNA of AI integration, the U.S. can lead in both capability and responsibility.
Final Thoughts
The future of AI in defense is not just a technological contest but a governance challenge. As U.S.–China competition intensifies, the dual goals of strategic advantage and global stability must be reconciled. The United States can lead by advancing a model that blends innovation with foresight, supporting commercial development, safeguarding national security, and promoting international norms that reduce the risk of AI-induced conflict. Balancing ambition with accountability will be essential to navigating this new frontier in great power competition.
To address these challenges, the United States must establish a robust architecture for AI oversight grounded in technical expertise, interagency cooperation, and international collaboration. Specific steps include expanding public-private partnerships for AI risk assessment, promoting export control regimes for military-relevant algorithms, and enhancing information-sharing channels with allies to detect and mitigate AI vulnerabilities in real time. Addressing AI-related instability will also require investment in Track 1.5 and Track 2 dialogues with Chinese academic and policy communities to build mutual understanding and reduce perception gaps.
Additionally, U.S. policymakers must confront the accelerating convergence of civilian and military AI by enacting adaptable regulatory frameworks that keep pace with innovation cycles while maintaining robust oversight, anchored in the federal AI oversight office proposed above. The defense sector must also collaborate closely with academia and the private sector to ensure the technical integrity and ethical deployment of its systems.
Measuring Policy Impact and Strategic Effectiveness
Clear metrics for evaluating AI governance and defense integration are necessary to ensure accountability and policy impact. Success indicators should include:
The establishment and operationalization of a federal AI oversight body.
The number and frequency of bilateral and trilateral confidence-building engagements focused on AI stability.
The adoption of algorithmic audit trails and human-in-the-loop standards across DoD programs.
Increased interoperability and norm convergence among allies and partners.
Demonstrated resilience of AI-enabled systems during red-teaming or stress-testing exercises (a toy example of such a test follows this list).
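To illustrate how resilience under stress testing could be quantified, the sketch below measures how often a model's decisions flip under random input perturbation, a crude stand-in for noisy or spoofed sensor data. The stress_test helper and the toy linear classifier are hypothetical; real red-team evaluations would use adversarially optimized, domain-specific perturbations rather than Gaussian noise.

```python
import numpy as np

def stress_test(model, inputs: np.ndarray, noise_scale: float = 0.1,
                trials: int = 100, seed: int = 0) -> float:
    """Return the fraction of trials in which perturbation flips a decision.

    `model` is any callable mapping a batch of feature vectors to labels.
    A lower flip rate under a realistic perturbation budget is one crude
    proxy for resilience to noisy or spoofed inputs.
    """
    rng = np.random.default_rng(seed)
    baseline = model(inputs)
    flips = 0
    for _ in range(trials):
        noisy = inputs + rng.normal(0.0, noise_scale, size=inputs.shape)
        flips += int(np.any(model(noisy) != baseline))
    return flips / trials

if __name__ == "__main__":
    # Toy usage: a linear-threshold "classifier" over 2-D features.
    weights = np.array([1.0, -1.0])
    model = lambda x: (x @ weights > 0).astype(int)
    samples = np.array([[0.6, 0.5], [0.2, 0.9]])
    print(f"Decision-flip rate: {stress_test(model, samples):.2f}")
```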
Annual reporting mechanisms, either through the Government Accountability Office (GAO) or a dedicated congressional AI oversight committee, could further institutionalize transparency, track progress, and provide corrective feedback loops over time.
Ultimately, global AI governance will depend on American leadership in shaping inclusive and transparent mechanisms that reflect democratic values. While near-term consensus with China may be limited, the United States can build coalitions with allies and like-minded partners to set baseline norms on autonomy, targeting, and escalation control that shape the future trajectory of military AI integration. Proactive governance, rather than reactive crisis management, will be the cornerstone of strategic stability in the AI era.
Sources:
Allen, Gregory C. 2019. Understanding China's AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security. Washington, DC: Center for a New American Security.
Allen, Gregory C. 2020. Understanding AI Technology. Washington, DC: Joint Artificial Intelligence Center and Center for Security and Emerging Technology.
Atlantic Council. 2024. Commission on Defense Innovation Adoption Final Report. Washington, DC: Atlantic Council.
Boulanin, Vincent, and Maaike Verbruggen. 2017. Mapping the Development of Autonomy in Weapon Systems. Solna, Sweden: Stockholm International Peace Research Institute.
Chan, Kyle, Gregory Smith, Jimmy Goodrich, Gerard DiPippo, and Konstantin F. Pilz. 2025. Full Stack: China's Evolving Industrial Policy for AI. RAND Corporation Perspectives PE-A4012-1. Santa Monica, CA: RAND Corporation.
Horowitz, Michael C. 2018. "The Promise and Peril of Military Applications of Artificial Intelligence." Bulletin of the Atomic Scientists, April 23, 2018.
Horowitz, Michael C., and Paul Scharre. 2021. AI and International Stability: Risks and Confidence-Building Measures. Washington, DC: Center for a New American Security.
Insikt Group. 2025. Measuring the US-China AI Gap. Recorded Future Insikt Group research report, May 8. Arlington, VA: Recorded Future.
Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton & Company.
U.S.–China Economic and Security Review Commission. 2019. "Section 2: U.S.–China Competition in Emerging Technologies." In 2019 Annual Report to Congress, 407–442. Washington, DC: U.S.–China Economic and Security Review Commission.
Webster, Graham, Rogier Creemers, Elsa Kania, and Paul Triolo. 2017. "Full Translation: China's 'New Generation Artificial Intelligence Development Plan' (2017)." Stanford DigiChina. August 1, 2017. https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/.
Zhang, Helen, and Dewey Murdick. 2023. Defense or Diffusion? Open-Source AI in the U.S.–China Competition. Washington, DC: Center for Security and Emerging Technology.