Military AI Needs Technically-Informed Regulation to Safeguard AI Research and its Applications

Published: 26 Sept 2025, Last Modified: 29 Oct 2025, NeurIPS 2025 Position Paper Track, CC BY 4.0
Keywords: Military AI, AI Safety, AI Policy, Social Impacts of AI, Military AI Policy, AI Weapons, AI Ethics, Autonomous Weapons Systems, AI Risks, AI Governance, Dual-Use AI Research, Autonomous Weapons
TL;DR: We argue that lethal autonomous weapon systems that use AI constitute a distinct class of weapon system posing novel risks, and that mitigating those risks requires new regulation informed by the AI community.
Abstract: Military weapon systems and command-and-control infrastructure augmented by artificial intelligence (AI) have seen rapid development and deployment in recent years. However, the sociotechnical impacts of AI on combat systems, military decision-making, and the norms of warfare remain understudied. We focus on a specific subset of lethal autonomous weapon systems (LAWS) that use AI for targeting or battlefield decisions. We refer to this subset as AI-powered lethal autonomous weapon systems (AI-LAWS) and argue that they introduce novel risks, including unanticipated escalation, poor reliability in unfamiliar environments, and erosion of human oversight, all of which threaten both military effectiveness and the openness of AI research. These risks cannot be addressed by high-level policy alone; effective regulation must be grounded in the technical behavior of AI models, and AI researchers must be involved throughout the regulatory lifecycle. Because existing frameworks do not distinguish AI-LAWS from conventional LAWS, we propose a clear, behavior-based definition of AI-LAWS, i.e., systems that introduce unique risks through their use of modern AI, as a foundation for technically grounded regulation. Using this definition, we propose several technically-informed policy directions and invite greater participation from the AI research community in military AI policy discussions.
Lay Summary: We argue that recently developed military weapon and command systems using frontier AI constitute a categorically new type of weapon system with unique risks and regulatory needs. These systems have been deployed internationally in growing numbers and variety in recent years, yet existing regulation and oversight mechanisms are insufficient and nonspecific. We discuss why technical AI researchers should contribute to this under-discussed and fast-moving topic, and we outline specific definitions and policy proposals to aid future debate.
Submission Number: 561