Eval Factsheets: A Structured Framework for Documenting AI Evaluations
By: Florian Bordes, Candace Ross, Justine T Kao, and more
Potential Business Impact:
Makes AI tests clearer and easier to trust.
The rapid proliferation of benchmarks has created significant challenges for reproducibility, transparency, and informed decision-making. Yet unlike datasets and models -- which benefit from structured documentation frameworks such as Datasheets and Model Cards -- evaluation methodologies lack systematic documentation standards. We introduce Eval Factsheets, a structured, descriptive framework for documenting AI system evaluations through a comprehensive taxonomy and a questionnaire-based approach. Our framework organizes evaluation characteristics along five fundamental dimensions: Context (Who made the evaluation, and when?), Scope (What does it evaluate?), Structure (What is the evaluation built with?), Method (How does it work?) and Alignment (In what ways is it reliable, valid, and robust?). We implement this taxonomy as a practical questionnaire spanning five sections with mandatory and recommended documentation elements. Through case studies on multiple benchmarks, we demonstrate that Eval Factsheets effectively capture diverse evaluation paradigms -- from traditional benchmarks to LLM-as-judge methodologies -- while maintaining consistency and comparability. We hope Eval Factsheets will be incorporated into both existing and newly released evaluation frameworks and will lead to greater transparency and reproducibility.
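To make the five-dimension taxonomy concrete, the sketch below shows one way a factsheet record could be represented in code. This is a minimal, hypothetical illustration: the paper describes a questionnaire, not a schema, and every field name here (e.g. `capability_evaluated`, `validity_evidence`) is an assumption for illustration rather than the framework's actual documentation elements.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of an Eval Factsheet; field names are illustrative
# assumptions, not the paper's actual questionnaire items.

@dataclass
class Context:
    authors: str                       # Who made the evaluation?
    release_date: str                  # When was it released?
    maintainer: Optional[str] = None   # Recommended rather than mandatory

@dataclass
class Scope:
    capability_evaluated: str          # What does it evaluate?
    target_systems: list[str] = field(default_factory=list)

@dataclass
class Structure:
    data_sources: list[str] = field(default_factory=list)  # What is it built with?
    size: Optional[int] = None

@dataclass
class Method:
    protocol: str                      # How does it work? (e.g. exact match, LLM-as-judge)
    metrics: list[str] = field(default_factory=list)

@dataclass
class Alignment:
    validity_evidence: Optional[str] = None    # In what ways is it reliable/valid/robust?
    known_limitations: Optional[str] = None

@dataclass
class EvalFactsheet:
    context: Context
    scope: Scope
    structure: Structure
    method: Method
    alignment: Alignment


# Example with placeholder values
factsheet = EvalFactsheet(
    context=Context(authors="Example Lab", release_date="2024-01"),
    scope=Scope(capability_evaluated="mathematical reasoning", target_systems=["LLMs"]),
    structure=Structure(data_sources=["expert-written problems"], size=1000),
    method=Method(protocol="LLM-as-judge", metrics=["accuracy"]),
    alignment=Alignment(known_limitations="judge bias not yet quantified"),
)
```

Structuring the five sections as explicit records like this would keep mandatory items as required fields and recommended items as optional ones, which mirrors the questionnaire's mandatory/recommended split described in the abstract.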
Similar Papers
Datasheets Aren't Enough: DataRubrics for Automated Quality Metrics and Accountability
Machine Learning (CS)
Makes computer learning data better and easier to check.
From Feedback to Checklists: Grounded Evaluation of AI-Generated Clinical Notes
Computation and Language
Helps doctors check AI notes for mistakes.
Audit Cards: Contextualizing AI Evaluations
Computers and Society
Makes AI audits clearer and more trustworthy.