Understanding the Role of Training Data in Test-Time Scaling
By: Adel Javanmard, Baharan Mirzasoleiman, Vahab Mirrokni
Potential Business Impact:
Helps AI solve harder problems by thinking more.
Test-time scaling improves the reasoning capabilities of large language models (LLMs) by allocating extra compute to generate longer Chains-of-Thought (CoTs). This enables models to tackle more complex problems by breaking them down into additional steps, backtracking, and correcting mistakes. Despite the strong performance demonstrated by OpenAI's o1 and DeepSeek R1, the conditions in the training data under which long CoTs emerge, and when such long CoTs improve performance, remain unclear. In this paper, we study the performance of test-time scaling for transformers trained on an in-context weight prediction task for linear regression. Our analysis provides a theoretical explanation for several intriguing observations: First, at any fixed test error, increasing test-time compute allows us to reduce the number of in-context examples (context length) in training prompts. Second, if the skills required to solve a downstream task are not sufficiently present in the training data, increasing test-time compute can harm performance. Finally, we characterize task hardness via the smallest eigenvalue of its feature covariance matrix and show that training on a diverse, relevant, and hard set of tasks yields the best test-time scaling performance. We confirm our findings with experiments on large, nonlinear transformer architectures.
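For concreteness, here is a minimal sketch (not the authors' code) of the setup described in the abstract: sampling an in-context linear regression prompt from a task with a given feature covariance, and using the covariance's smallest eigenvalue as the hardness measure mentioned above. The function names (`make_prompt`, `task_hardness`) and the reading that a smaller minimum eigenvalue corresponds to a harder task are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def make_prompt(n_examples: int, dim: int, cov: np.ndarray, rng: np.random.Generator):
    """Sample a prompt of (x, y) pairs from a linear task with feature covariance `cov`.

    Illustrative sketch: the task is defined by a random weight vector w, and the
    in-context examples are drawn from a zero-mean Gaussian with covariance `cov`.
    """
    w = rng.standard_normal(dim)                                       # ground-truth task weights
    x = rng.multivariate_normal(np.zeros(dim), cov, size=n_examples)   # in-context features
    y = x @ w                                                           # noiseless labels for the sketch
    return x, y, w

def task_hardness(cov: np.ndarray) -> float:
    """Hardness proxy from the abstract: the smallest eigenvalue of the feature covariance.

    Assumption (not stated in the abstract): smaller minimum eigenvalue = harder task.
    """
    return float(np.linalg.eigvalsh(cov).min())

rng = np.random.default_rng(0)
dim = 8
easy_cov = np.eye(dim)                            # well-conditioned features
hard_cov = np.diag(np.linspace(0.05, 1.0, dim))   # some directions are barely represented
print(task_hardness(easy_cov), task_hardness(hard_cov))

x, y, w = make_prompt(n_examples=16, dim=dim, cov=hard_cov, rng=rng)
```

Under this sketch, varying `n_examples` (context length) and the spectrum of `cov` is one way to mimic the trade-offs the paper analyzes between training-prompt length, task hardness, and the benefit of extra test-time compute.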
Similar Papers
s1: Simple test-time scaling
Computation and Language
Makes AI think longer to solve hard math problems.
Crosslingual Reasoning through Test-Time Scaling
Computation and Language
Computers can solve math problems in many languages.
m1: Unleash the Potential of Test-Time Scaling for Medical Reasoning with Large Language Models
Computation and Language
Improves AI's medical knowledge and answers.