How frontier AI companies could implement an internal audit function

Published: December 16, 2025 | arXiv ID: 2512.14902v1

By: Francesca Gomez, Adam Buick, Leah Ferentinos, and more

Potential Business Impact:

Helps frontier AI companies set up internal audit functions that give boards and regulators evidence-based assurance over catastrophic-risk controls.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Frontier AI developers operate at the intersection of rapid technical progress, extreme risk exposure, and growing regulatory scrutiny. While a range of external evaluations and safety frameworks have emerged, comparatively little attention has been paid to how internal organizational assurance should be structured to provide sustained, evidence-based oversight of catastrophic and systemic risks. This paper examines how an internal audit function could be designed to provide meaningful assurance for frontier AI developers, and the practical trade-offs that shape its effectiveness. Drawing on professional internal auditing standards, risk-based assurance theory, and emerging frontier-AI governance literature, we analyze four core design dimensions: (i) audit scope across model-level, system-level, and governance-level controls; (ii) sourcing arrangements (in-house, co-sourced, and outsourced); (iii) audit frequency and cadence; and (iv) access to sensitive information required for credible assurance. For each dimension, we define the relevant option space, assess benefits and limitations, and identify key organizational and security trade-offs. Our findings suggest that internal audit, if deliberately designed for the frontier AI context, can play a central role in strengthening safety governance, complementing external evaluations, and providing boards and regulators with higher-confidence, system-wide assurance over catastrophic risk controls.
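The abstract frames internal audit design as a choice across four dimensions: scope, sourcing, cadence, and information access. As a purely illustrative sketch (not from the paper itself), those dimensions could be encoded as a structured audit-engagement configuration; the enum values beyond the paper's own terms (e.g., the specific cadence and access options) are assumptions for illustration only.

```python
from dataclasses import dataclass
from enum import Enum


class AuditScope(Enum):
    """Levels of controls an engagement can cover (dimension i)."""
    MODEL_LEVEL = "model-level"
    SYSTEM_LEVEL = "system-level"
    GOVERNANCE_LEVEL = "governance-level"


class Sourcing(Enum):
    """Sourcing arrangements for the audit function (dimension ii)."""
    IN_HOUSE = "in-house"
    CO_SOURCED = "co-sourced"
    OUTSOURCED = "outsourced"


class Cadence(Enum):
    """Illustrative audit frequencies (dimension iii); the paper treats
    cadence as a design trade-off, and these values are assumptions."""
    CONTINUOUS = "continuous"
    PER_RELEASE = "per-release"
    ANNUAL = "annual"


class AccessLevel(Enum):
    """Hypothetical tiers of access to sensitive information (dimension iv)."""
    DOCUMENTATION_ONLY = "documentation-only"
    STRUCTURED_QUERY = "structured-query"
    FULL_INTERNAL = "full-internal"


@dataclass
class AuditEngagement:
    """One point in the four-dimensional design space the paper analyzes."""
    scope: AuditScope
    sourcing: Sourcing
    cadence: Cadence
    access: AccessLevel


# Example configuration: a co-sourced, per-release audit of system-level
# controls with full internal access -- one of many possible combinations.
engagement = AuditEngagement(
    scope=AuditScope.SYSTEM_LEVEL,
    sourcing=Sourcing.CO_SOURCED,
    cadence=Cadence.PER_RELEASE,
    access=AccessLevel.FULL_INTERNAL,
)
print(engagement)
```

Enumerating the option space this way simply makes the paper's point concrete: each dimension can be varied independently, and the organizational and security trade-offs arise from the combination chosen.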

Page Count
28 pages

Category
Computer Science:
Computers and Society