Efficient Prediction of Pass@k Scaling in Large Language Models

Published: October 6, 2025 | arXiv ID: 2510.05197v1

By: Joshua Kazdan, Rylan Schaeffer, Youssef Allouah, and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Predicts an AI model's rare risks and capabilities more accurately, at lower cost.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Assessing the capabilities and risks of frontier AI systems is a critical area of research, and recent work has shown that repeated sampling from models can dramatically amplify both. For instance, repeated sampling can increase a model's capabilities, such as solving difficult math and coding problems, but it can also increase its potential for harm, such as being jailbroken. Such results raise a crucial question for both capability and safety forecasting: how can one accurately predict a model's behavior when scaled to a massive number of attempts, given a vastly smaller sampling budget? This question is directly relevant to model providers, who serve hundreds of millions of users daily, and to governmental regulators, who seek to prevent harms. To answer this question, we make three contributions. First, we find that standard methods for fitting these scaling laws suffer from statistical shortcomings that hinder predictive accuracy, especially in data-limited scenarios. Second, we remedy these shortcomings by introducing a robust estimation framework that uses a beta-binomial distribution to generate more accurate predictions from limited data. Third, we propose a dynamic sampling strategy that allocates a greater budget to harder problems. Combined, these innovations enable more reliable prediction of rare risks and capabilities at a fraction of the computational cost.
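The beta-binomial idea in the abstract can be sketched concretely: model each problem's per-attempt success probability p as drawn from a Beta(a, b) prior, fit (a, b) by maximum likelihood on a small per-problem sample, then extrapolate pass@k analytically via E[1 - (1 - p)^k] = 1 - B(a, b + k) / B(a, b). The sketch below is a minimal illustration of that general technique on synthetic data; the data, parameter values, and function names are assumptions for illustration, not the paper's actual method or results.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, gammaln

# Hypothetical setup: 200 problems, each sampled n = 50 times with a
# latent success probability drawn from a skewed Beta (rare successes).
rng = np.random.default_rng(0)
n = 50                                   # small per-problem sampling budget
true_p = rng.beta(0.5, 20.0, size=200)   # latent per-problem success rates
c = rng.binomial(n, true_p)              # observed success counts

def neg_log_lik(params):
    """Negative beta-binomial log-likelihood summed over problems."""
    a, b = np.exp(params)  # log-parameterization keeps a, b > 0
    ll = (gammaln(n + 1) - gammaln(c + 1) - gammaln(n - c + 1)
          + betaln(c + a, n - c + b) - betaln(a, b))
    return -ll.sum()

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)

def predicted_pass_at_k(k, a, b):
    """E[1 - (1 - p)^k] under p ~ Beta(a, b): 1 - B(a, b + k) / B(a, b)."""
    return 1.0 - np.exp(betaln(a, b + k) - betaln(a, b))

# Extrapolate far beyond the observed budget of n = 50 attempts.
for k in (50, 1_000, 100_000):
    print(f"pass@{k} ≈ {predicted_pass_at_k(k, a_hat, b_hat):.4f}")
```

Because the extrapolation is a closed-form moment of the fitted Beta prior, no extra sampling is needed to evaluate pass@k at very large k, which is what makes this family of estimators cheap relative to brute-force repeated sampling.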

Country of Origin
🇺🇸 United States

Page Count
24 pages

Category
Computer Science:
Artificial Intelligence