Mapping AI Benchmark Data to Quantitative Risk Estimates Through Expert Elicitation

Published: March 6, 2025 | arXiv ID: 2503.04299v2

By: Malcolm Murray, Henry Papadatos, Otter Quarks, and more

Potential Business Impact:

Helps translate AI benchmark results into quantitative estimates of real-world risk.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

The literature and multiple experts point to many potential risks from large language models (LLMs), but there are still very few direct measurements of the actual harms posed. AI risk assessment has so far focused on measuring model capabilities, but capabilities are only indicators of risk, not measures of risk. Better modeling and quantification of AI risk scenarios can help bridge this disconnect and link the capabilities of LLMs to tangible real-world harm. This paper makes an early contribution to this field by demonstrating how existing AI benchmarks can be used to facilitate the creation of risk estimates. We describe the results of a pilot study in which experts use information from Cybench, an AI benchmark, to generate probability estimates. We show that the methodology seems promising for this purpose, while noting improvements that can be made to further strengthen its application in quantitative AI risk assessment.
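To make the elicitation idea concrete, here is a minimal illustrative sketch, not the paper's protocol: it assumes experts each give a probability for a cyber-harm scenario after reviewing a benchmark signal (e.g., a Cybench solve rate), and pools those estimates with a geometric mean of odds. The scenario, inputs, and pooling rule are assumptions for illustration only.

```python
# Illustrative sketch only; NOT the elicitation protocol from the paper.
# Pools expert-elicited probabilities for a hypothetical risk scenario
# using the geometric mean of odds, a common aggregation rule.

import math


def pool_geometric_odds(probabilities: list[float]) -> float:
    """Pool expert probabilities via the geometric mean of odds."""
    odds = [p / (1.0 - p) for p in probabilities]
    pooled_odds = math.prod(odds) ** (1.0 / len(odds))
    return pooled_odds / (1.0 + pooled_odds)


# Hypothetical inputs: each expert's probability that a given cyber-harm
# scenario occurs within a year, elicited after reviewing a benchmark
# result (e.g., "the model solves 40% of Cybench tasks").
expert_estimates = [0.02, 0.05, 0.01, 0.08]

print(f"Pooled risk estimate: {pool_geometric_odds(expert_estimates):.3f}")
```

Other pooling rules (simple averaging, median, or weighting by expert calibration) are equally plausible; the choice is part of what a full elicitation methodology would need to justify.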

Page Count
23 pages

Category
Computer Science:
Artificial Intelligence