Preliminary suggestions for rigorous GPAI model evaluations
By: Patricia Paskov, Michael J. Byun, Kevin Wei, and more
Potential Business Impact:
Makes AI tests fairer and easier to repeat.
This document presents a preliminary compilation of general-purpose AI (GPAI) evaluation practices that may promote internal validity, external validity and reproducibility. It includes suggestions for human uplift studies and benchmark evaluations, as well as cross-cutting suggestions that may apply to many different evaluation types. Suggestions are organised across four stages in the evaluation life cycle: design, implementation, execution and documentation. Drawing on established practices in machine learning, statistics, psychology, economics, biology and other fields recognised to have important lessons for AI evaluation, these suggestions seek to contribute to the nascent and evolving science of GPAI evaluations. The intended audience of this document includes providers of GPAI models presenting systemic risk (GPAISR), for whom the EU AI Act lays out specific evaluation requirements; third-party evaluators; policymakers assessing the rigour of evaluations; and academic researchers developing or conducting GPAI evaluations.
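As an illustrative sketch only (the function names, toy benchmark and bootstrap settings below are assumptions, not drawn from the paper), the Python snippet that follows shows how a benchmark evaluation might fix its random seed, keep per-item results, and report a bootstrap confidence interval alongside the point estimate: practices commonly associated with the reproducibility and statistical rigour the document discusses.

```python
# Illustrative sketch, not the paper's method: a minimal benchmark evaluation
# that fixes a random seed, records per-item scores, and reports a 95%
# bootstrap confidence interval in addition to the mean accuracy.
import random
import statistics


def evaluate(model_fn, benchmark, seed=0, n_bootstrap=1000):
    """Score `model_fn` on `benchmark` (a list of (prompt, reference) pairs)
    and return (mean accuracy, 95% bootstrap confidence interval)."""
    rng = random.Random(seed)  # fixed seed so the resampling is reproducible
    per_item = [1.0 if model_fn(prompt) == reference else 0.0
                for prompt, reference in benchmark]

    # Resample the per-item scores to quantify sampling uncertainty.
    means = sorted(
        statistics.mean(rng.choice(per_item) for _ in per_item)
        for _ in range(n_bootstrap)
    )
    ci_lo = means[int(0.025 * n_bootstrap)]
    ci_hi = means[int(0.975 * n_bootstrap)]
    return statistics.mean(per_item), (ci_lo, ci_hi)


if __name__ == "__main__":
    # Toy benchmark and "model", purely for demonstration.
    toy_benchmark = [(f"2+{i}", str(2 + i)) for i in range(50)]
    toy_model = lambda prompt: str(eval(prompt))
    score, (lo, hi) = evaluate(toy_model, toy_benchmark)
    print(f"accuracy={score:.2f}, 95% CI=[{lo:.2f}, {hi:.2f}]")
```

Reporting an interval and the seed alongside a bare score makes it easier for third-party evaluators and policymakers to judge whether a difference between two runs falls within sampling noise, and to rerun the evaluation and obtain the same numbers.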
Similar Papers
Securing External Deeper-than-black-box GPAI Evaluations
Computers and Society
Tests AI safely without seeing its secrets.
Is General-Purpose AI Reasoning Sensitive to Data-Induced Cognitive Biases? Dynamic Benchmarking on Typical Software Engineering Dilemmas
Human-Computer Interaction
AI can make mistakes like people.
Bench-2-CoP: Can We Trust Benchmarking for EU AI Compliance?
Artificial Intelligence
Tests AI for dangers like losing control.