How frontier AI companies could implement an internal audit function
By: Francesca Gomez, Adam Buick, Leah Ferentinos, and others
Potential Business Impact:
Helps AI companies provide credible, ongoing safety oversight of their most powerful systems.
Frontier AI developers operate at the intersection of rapid technical progress, extreme risk exposure, and growing regulatory scrutiny. While a range of external evaluations and safety frameworks has emerged, comparatively little attention has been paid to how internal organizational assurance should be structured to provide sustained, evidence-based oversight of catastrophic and systemic risks. This paper examines how an internal audit function could be designed to provide meaningful assurance for frontier AI developers, and the practical trade-offs that shape its effectiveness. Drawing on professional internal auditing standards, risk-based assurance theory, and emerging frontier AI governance literature, we analyze four core design dimensions: (i) audit scope across model-level, system-level, and governance-level controls; (ii) sourcing arrangements (in-house, co-sourced, and outsourced); (iii) audit frequency and cadence; and (iv) access to sensitive information required for credible assurance. For each dimension, we define the relevant option space, assess benefits and limitations, and identify key organizational and security trade-offs. Our findings suggest that internal audit, if deliberately designed for the frontier AI context, can play a central role in strengthening safety governance, complementing external evaluations, and providing boards and regulators with higher-confidence, system-wide assurance over catastrophic risk controls.
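To make the four-dimension option space concrete, here is a minimal sketch of how one design point might be encoded. This is purely illustrative: the paper defines these dimensions in prose, not as a schema, and every class and enum value below is a hypothetical name, not terminology taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the abstract's four design dimensions.

class Scope(Enum):
    MODEL_LEVEL = "model-level controls"
    SYSTEM_LEVEL = "system-level controls"
    GOVERNANCE_LEVEL = "governance-level controls"

class Sourcing(Enum):
    IN_HOUSE = "in-house"
    CO_SOURCED = "co-sourced"
    OUTSOURCED = "outsourced"

class Cadence(Enum):
    CONTINUOUS = "continuous monitoring"
    PER_RELEASE = "per model release"
    PERIODIC = "fixed periodic cycle"

class Access(Enum):
    SUMMARY_ONLY = "summary artifacts only"
    CONTROLLED = "controlled access to sensitive evidence"
    FULL = "full access under security constraints"

@dataclass
class AuditDesign:
    """One point in the design space the paper analyzes."""
    scope: set[Scope]       # which control layers the audit covers
    sourcing: Sourcing      # who performs the audit work
    cadence: Cadence        # how often assurance is produced
    access: Access          # what evidence auditors may see

# Example: a co-sourced audit covering all three scope levels,
# run per model release with controlled access to sensitive evidence.
example = AuditDesign(
    scope={Scope.MODEL_LEVEL, Scope.SYSTEM_LEVEL, Scope.GOVERNANCE_LEVEL},
    sourcing=Sourcing.CO_SOURCED,
    cadence=Cadence.PER_RELEASE,
    access=Access.CONTROLLED,
)
print(example)
```

A sketch like this mainly shows that the dimensions are independent axes: each combination is a distinct audit design whose benefits, limitations, and security trade-offs the paper assesses separately.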
Similar Papers
Evaluating AI Companies' Frontier Safety Frameworks: Methodology and Results
Computers and Society
Helps AI companies build safer, more responsible systems.
Catastrophic Liability: Managing Systemic Risks in Frontier AI Development
Computers and Society
Makes AI development safer and holds developers accountable.
Third-party compliance reviews for frontier AI safety frameworks
Computers and Society
Checks whether AI companies follow their own safety frameworks.