Preliminary suggestions for rigorous GPAI model evaluations

Published: July 22, 2025 | arXiv ID: 2508.00875v1

By: Patricia Paskov, Michael J. Byun, Kevin Wei, and more

Potential Business Impact:

Makes AI evaluations fairer and easier to reproduce.

This document presents a preliminary compilation of general-purpose AI (GPAI) evaluation practices that may promote internal validity, external validity and reproducibility. It includes suggestions for human uplift studies and benchmark evaluations, as well as cross-cutting suggestions that may apply to many different evaluation types. Suggestions are organised across four stages in the evaluation life cycle: design, implementation, execution and documentation. Drawing from established practices in machine learning, statistics, psychology, economics, biology and other fields recognised to have important lessons for AI evaluation, these suggestions seek to contribute to the conversation on the nascent and evolving field of the science of GPAI evaluations. The intended audience of this document includes providers of GPAI models presenting systemic risk (GPAISR), for whom the EU AI Act lays out specific evaluation requirements; third-party evaluators; policymakers assessing the rigour of evaluations; and academic researchers developing or conducting GPAI evaluations.

Page Count
24 pages

Category
Computer Science:
Computers and Society