Evaluating AI Companies' Frontier Safety Frameworks: Methodology and Results
By: Lily Stelling, Malcolm Murray, Simeon Campos, and more
Potential Business Impact:
Gives AI companies concrete, criterion-level recommendations for strengthening their frontier safety frameworks.
Following the Seoul AI Safety Summit in 2024, twelve AI companies published frontier safety frameworks outlining their approaches to managing catastrophic risks from advanced AI systems. These frameworks now serve as a key mechanism for AI risk governance and are referenced by regulatory and governance instruments such as the EU AI Act's Code of Practice and California's Transparency in Frontier Artificial Intelligence Act. Given their centrality to AI risk management, assessments of these frameworks are warranted. Existing assessments evaluate them at a high level of abstraction and lack granularity on the specific practices companies should adopt. We address this gap by developing a 65-criterion assessment methodology grounded in established risk management principles from safety-critical industries. We evaluate the twelve frameworks across four dimensions: risk identification, risk analysis and evaluation, risk treatment, and risk governance. Companies' current scores are low, ranging from 8% to 35%. By adopting best practices already in use across the frameworks, companies could reach 52%. The most critical gaps are nearly universal: companies generally fail to (a) define quantitative risk tolerances, (b) specify capability thresholds for pausing development, and (c) systematically identify unknown risks. To guide improvement, we provide specific recommendations for each company and each criterion.
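The paper does not publish its scoring code, but the aggregation logic the abstract implies can be illustrated with a minimal Python sketch. It assumes each framework receives a grade in [0, 1] on each of the 65 criteria, that a company's score is its mean grade, and that the 52% "best practices" ceiling corresponds to taking, for each criterion, the highest grade any company achieved. The grades below are placeholder data, not the paper's results.

```python
import numpy as np

# Hypothetical grades: one row per company, one column per criterion.
# The real methodology uses 65 criteria spanning four dimensions
# (risk identification, analysis and evaluation, treatment, governance);
# the values here are random placeholders in [0, 1].
rng = np.random.default_rng(0)
grades = rng.uniform(0.0, 0.4, size=(12, 65))  # 12 companies x 65 criteria

# A company's score: mean grade across all criteria, as a percentage.
company_scores = grades.mean(axis=1) * 100
print("Company scores (%):", np.round(company_scores, 1))

# "Best practices" ceiling: for each criterion, take the highest grade
# achieved by any company, then average. This mirrors the abstract's
# claim that adopting practices already in use across the frameworks
# would raise every company's achievable score.
best_practice_ceiling = grades.max(axis=0).mean() * 100
print(f"Ceiling from existing best practices: {best_practice_ceiling:.1f}%")
```

Because the ceiling is a per-criterion maximum, it can exceed every individual company's score, which is why the paper's 52% figure sits well above the 8% to 35% range of current scores.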
Similar Papers
Emerging Practices in Frontier AI Safety Frameworks
Computers and Society
Surveys practices that recur across published frontier AI safety frameworks.
The 2025 OpenAI Preparedness Framework does not guarantee any AI risk mitigation practices: a proof-of-concept for affordance analyses of AI safety policies
Computers and Society
Argues that the framework's affordances leave risk mitigation decisions at the company's discretion.
International AI Safety Report 2025: Second Key Update: Technical Safeguards and Risk Management
Computers and Society
Updates the report's coverage of technical safeguards and risk management for advanced AI.