A Methodology for Quantitative AI Risk Modeling
By: Malcolm Murray, Steve Barrett, Henry Papadatos, and more
Potential Business Impact:
Helps predict and prevent AI from causing harm.
Although general-purpose AI systems offer transformational opportunities in science and industry, they simultaneously raise critical concerns about safety, misuse, and potential loss of control. Despite these risks, methods for assessing and managing them remain underdeveloped. Effective risk management requires systematic modeling to characterize potential harms, as emphasized in frameworks such as the EU General-Purpose AI Code of Practice. This paper advances the risk modeling component of AI risk management by introducing a methodology that integrates scenario building with quantitative risk estimation, drawing on established approaches from other high-risk industries. Our methodology models risks through a six-step process: (1) defining risk scenarios, (2) decomposing them into quantifiable parameters, (3) quantifying baseline risk without AI models, (4) identifying key risk indicators such as benchmarks, (5) mapping these indicators to model parameters to estimate LLM uplift, and (6) aggregating individual parameters into risk estimates that support concrete claims (e.g., an X% probability of more than $Y in annual cyber damages). We examine the choices that underlie our methodology throughout the article, with discussions of strengths, limitations, and implications for future research. Our methodology is designed to be applicable to key systemic AI risks, including cyber offense, biological weapon development, harmful manipulation, and loss of control, and is validated through extensive application to LLM-enabled cyber offense. Detailed empirical results and cyber-specific insights are presented in a companion paper.
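The final aggregation step (step 6) can be illustrated with a minimal Monte Carlo sketch. This is not the paper's actual model: the function name, the Poisson frequency model, the lognormal severity model, and all parameter values below are illustrative assumptions, standing in for the scenario parameters of steps 2–3 and a benchmark-derived uplift factor from steps 4–5.

```python
import math
import random

def prob_exceeds(baseline_freq, mean_severity, uplift_factor,
                 damage_threshold, n_trials=20_000, seed=42):
    """Estimate P(annual damages > damage_threshold) by Monte Carlo.

    Hypothetical parameters (not taken from the paper):
      baseline_freq  - expected successful attacks/year without AI (step 3)
      mean_severity  - mean damage per successful attack, in USD (step 2)
      uplift_factor  - multiplier on attack frequency attributed to LLM
                       assistance, as might be derived from benchmark-based
                       risk indicators (steps 4-5)
    """
    rng = random.Random(seed)
    lam = baseline_freq * uplift_factor
    exceed = 0
    for _ in range(n_trials):
        # Sample the annual attack count from Poisson(lam) via Knuth's method.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        # Heavy-tailed per-attack damages via a simple lognormal severity model.
        total = sum(rng.lognormvariate(0.0, 1.0) * mean_severity
                    for _ in range(k))
        if total > damage_threshold:
            exceed += 1
    return exceed / n_trials

# Comparing the no-uplift baseline against a 2x frequency uplift yields the
# kind of claim the methodology targets: "X% probability of >$5M in annual
# cyber damages", with and without LLM assistance.
p_base = prob_exceeds(2.0, 1e6, 1.0, 5e6)
p_uplift = prob_exceeds(2.0, 1e6, 2.0, 5e6)
```

Any distributional choice here (Poisson frequency, lognormal severity) would itself be a modeling decision that steps 1–3 of the methodology are meant to make explicit and defensible.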
Similar Papers
Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse
Computers and Society
Helps stop AI from being used for cyberattacks.
Mapping AI Benchmark Data to Quantitative Risk Estimates Through Expert Elicitation
Artificial Intelligence
Helps measure how dangerous AI can be.
An Artificial Intelligence Value at Risk Approach: Metrics and Models
Computers and Society
Helps companies manage AI dangers better.