Evaluating AI Companies' Frontier Safety Frameworks: Methodology and Results

Published: December 1, 2025 | arXiv ID: 2512.01166v1

By: Lily Stelling, Malcolm Murray, Simeon Campos, et al.

Potential Business Impact:

Gives AI companies specific, criterion-level recommendations for strengthening their frontier safety frameworks and managing catastrophic risks.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Following the Seoul AI Safety Summit in 2024, twelve AI companies published frontier safety frameworks outlining their approaches to managing catastrophic risks from advanced AI systems. These frameworks now serve as a key mechanism for AI risk governance, relied upon by regulatory and governance instruments such as the EU AI Act's Code of Practice and California's Transparency in Frontier Artificial Intelligence Act. Given their centrality to AI risk management, assessments of such frameworks are warranted. Existing assessments evaluate them at a high level of abstraction and lack granularity on specific practices for companies to adopt. We address this gap by developing a 65-criteria assessment methodology grounded in established risk management principles from safety-critical industries. We evaluate the twelve frameworks across four dimensions: risk identification, risk analysis and evaluation, risk treatment, and risk governance. Companies' current scores are low, ranging from 8% to 35%. By adopting existing best practices already in use across the frameworks, companies could reach 52%. The most critical gaps are nearly universal: companies generally fail to (a) define quantitative risk tolerances, (b) specify capability thresholds for pausing development, and (c) systematically identify unknown risks. To guide improvement, we provide specific recommendations for each company and each criterion.
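The abstract's headline numbers (8% to 35% per company, 52% for a best-practice composite) suggest a simple aggregation: grade each framework on all 65 criteria, average the grades, and, for the composite, take the best grade any company achieved on each criterion. The Python sketch below illustrates that logic under stated assumptions; the per-dimension criterion counts, the 0-1 grading scale, equal weighting, and all function names are hypothetical choices for illustration, not the paper's actual rubric.

import random

# Hypothetical split of the 65 criteria across the paper's four dimensions.
# The true per-dimension counts and any criterion weights are assumptions.
DIMENSIONS = {
    "risk_identification": 15,
    "risk_analysis_and_evaluation": 15,
    "risk_treatment": 20,
    "risk_governance": 15,
}

def overall_score(grades: dict[str, list[float]]) -> float:
    """Overall score = mean grade across all 65 criteria, as a percentage.
    Assumes each criterion is graded on a 0-1 scale with equal weight."""
    all_grades = [g for dim in grades.values() for g in dim]
    return 100 * sum(all_grades) / len(all_grades)

def best_practice_composite(companies: list[dict[str, list[float]]]) -> float:
    """For each criterion, take the best grade any company achieved.
    This models 'adopting existing best practices already in use across
    the frameworks', the mechanism behind the paper's 52% figure."""
    best = {
        dim: [max(c[dim][i] for c in companies) for i in range(n)]
        for dim, n in DIMENSIONS.items()
    }
    return overall_score(best)

# Demo with twelve synthetic frameworks (random grades, not real data).
random.seed(0)
companies = [
    {dim: [random.random() * 0.4 for _ in range(n)]
     for dim, n in DIMENSIONS.items()}
    for _ in range(12)
]
print([round(overall_score(c), 1) for c in companies])  # per-company scores
print(round(best_practice_composite(companies), 1))     # composite ceiling

The composite exceeds every individual score because each criterion's maximum is taken across companies, which is why the paper can report an achievable 52% even though no single framework scores above 35%.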

Page Count
368 pages

Category
Computer Science: Computers and Society