Who Evaluates AI's Social Impacts? Mapping Coverage and Gaps in First and Third Party Evaluations
By: Anka Reuel, Avijit Ghosh, Jenny Chim, and more
Potential Business Impact:
Helps check AI for fairness and harm.
Foundation models are increasingly central to high-stakes AI systems, and governance frameworks now depend on evaluations to assess their risks and capabilities. Although general capability evaluations are widespread, social impact assessments covering bias, fairness, privacy, environmental costs, and labor practices remain uneven across the AI ecosystem. To characterize this landscape, we conduct the first comprehensive analysis of both first-party and third-party social impact evaluation reporting across a wide range of model developers. Our study examines 186 first-party release reports and 183 post-release evaluation sources, and complements this quantitative analysis with interviews with model developers. We find a clear division of evaluation labor: first-party reporting is sparse, often superficial, and has declined over time in key areas such as environmental impact and bias, while third-party evaluators, including academic researchers, nonprofits, and independent organizations, provide broader and more rigorous coverage of bias, harmful content, and performance disparities. However, this complementarity has limits. Only model developers can authoritatively report on data provenance, content moderation labor, financial costs, and training infrastructure, yet interviews reveal that these disclosures are often deprioritized unless tied to product adoption or regulatory compliance. Our findings indicate that current evaluation practices leave major gaps in assessing AI's societal impacts, highlighting the urgent need for policies that promote developer transparency, strengthen independent evaluation ecosystems, and create shared infrastructure to aggregate and compare third-party evaluations in a consistent and accessible way.
Similar Papers
Fostering the Ecosystem of AI for Social Impact Requires Expanding and Strengthening Evaluation Standards
Machine Learning (CS)
Helps AI help people without needing to be perfect.
AI Where It Matters: Where, Why, and How Developers Want AI Support in Daily Work
Software Engineering
Helps AI help coders with their hardest jobs.
A Study on the Framework for Evaluating the Ethics and Trustworthiness of Generative AI
Computers and Society
Makes AI safer and more trustworthy for everyone.