Internal Deployment Gaps in AI Regulation
By: Joe Kwon, Stephen Casper
Frontier AI regulations primarily focus on systems deployed to external users, where deployment is more visible and subject to outside scrutiny. However, high-stakes applications can also occur internally when companies deploy highly capable systems within their own organizations, such as for automating R&D, accelerating critical business processes, and handling sensitive proprietary data. This paper examines how frontier AI regulations in the United States and European Union in 2025 handle internal deployment. We identify three gaps that could allow internally deployed systems to escape intended oversight: (1) scope ambiguity that leaves internal systems outside regulatory obligations, (2) point-in-time compliance assessments that fail to capture the continuous evolution of internal systems, and (3) information asymmetries that undermine regulatory awareness and oversight. We then analyze why these gaps persist, examining tensions around measurability, incentives, and information access. Finally, we map potential approaches to addressing these gaps and the tradeoffs they involve. By understanding these patterns, we hope that policy choices around internally deployed AI systems can be made deliberately rather than incidentally.