Military AI Needs Technically-Informed Regulation to Safeguard AI Research and its Applications
By: Riley Simmons-Edler, Jean Dong, Paul Lushenko, and more
Potential Business Impact:
AI weapons could make wars start faster.
Military weapon systems and command-and-control infrastructure augmented by artificial intelligence (AI) have seen rapid development and deployment in recent years. However, the sociotechnical impacts of AI on combat systems, military decision-making, and the norms of warfare have been understudied. We focus on a specific subset of lethal autonomous weapon systems (LAWS) that use AI for targeting or battlefield decisions. We refer to this subset as AI-powered lethal autonomous weapon systems (AI-LAWS) and argue that they introduce novel risks, including unanticipated escalation, poor reliability in unfamiliar environments, and erosion of human oversight, all of which threaten both military effectiveness and the openness of AI research. These risks cannot be addressed by high-level policy alone; effective regulation must be grounded in the technical behavior of AI models, and we argue that AI researchers must therefore be involved throughout the regulatory lifecycle. Because existing frameworks do not distinguish AI-LAWS from conventional LAWS, we propose a clear, behavior-based definition of AI-LAWS, namely systems that introduce unique risks through their use of modern AI, as a foundation for technically grounded regulation. Using this definition, we propose several technically informed policy directions and invite greater participation from the AI research community in military AI policy discussions.
Similar Papers
Technical Risks of (Lethal) Autonomous Weapons Systems
Computers and Society
Robot weapons can be unpredictable and dangerous.
Development of management systems using artificial intelligence systems and machine learning methods for boards of directors (preprint, unofficial translation)
Computers and Society
Makes AI leaders follow rules fairly and safely.
Governing AI R&D: A Legal Framework for Constraining Dangerous AI
Computers and Society
Helps governments make AI rules without lawsuits.