The LLM Mirage: Economic Interests and the Subversion of Weaponization Controls
By: Ritwik Gupta, Andrew W. Reddie
Potential Business Impact:
Argues that AI security risk does not scale with compute alone, so policy must also account for data and algorithms.
U.S. AI security policy is increasingly shaped by an "LLM Mirage," the belief that national security risks scale in proportion to the compute used to train frontier language models. That premise fails in two ways. It miscalibrates strategy because adversaries can obtain weaponizable capabilities with task-specific systems that use specialized data, algorithmic efficiency, and widely available hardware, while compute controls harden only a high-end perimeter. It also destabilizes regulation because, absent a settled definition of "AI weaponization," compute thresholds are easily renegotiated as domestic priorities shift, turning security policy into a proxy contest over industrial competitiveness. We analyze how the LLM Mirage took hold, propose an intent-and-capability definition of AI weaponization grounded in effects and international humanitarian law, and outline measurement infrastructure based on live benchmarks across the full AI Triad (data, algorithms, compute) for weaponization-relevant capabilities.
Similar Papers
Large Language Models as a (Bad) Security Norm in the Context of Regulation and Compliance
Computers and Society
Argues that treating LLMs as a security norm can undermine both safety and legal compliance.
Military AI Needs Technically-Informed Regulation to Safeguard AI Research and its Applications
Computers and Society
Warns that military AI could accelerate conflict escalation without technically informed regulation.
Demystify, Use, Reflect: Preparing students to be informed LLM-users
Computers and Society
Teaches students to use AI tools wisely.