The Art of Scaling Test-Time Compute for Large Language Models
By: Aradhye Agarwal, Ayan Sengupta, Tanmoy Chakraborty
Potential Business Impact:
Improves LLM reasoning by allocating extra compute at inference time, and offers a recipe for choosing the best scaling strategy given problem difficulty, model type, and budget.
Test-time scaling (TTS) -- the dynamic allocation of compute during inference -- is a promising direction for improving reasoning in large language models (LLMs). However, a systematic comparison of well-known TTS strategies under identical conditions is missing, and the influence of model type and problem difficulty on performance remains unclear. To address these gaps, we conduct the first large-scale study of TTS, spanning over thirty billion tokens generated by eight open-source LLMs (7B to 235B parameters) across four reasoning datasets. We observe three consistent trends: (1) no single TTS strategy universally dominates; (2) reasoning models exhibit distinct trace-quality patterns across problem difficulty and trace length, forming short-horizon and long-horizon categories; and (3) for a given model type, optimal TTS performance scales monotonically with compute budget. Based on these insights, we distill a practical recipe for selecting the best TTS strategy given problem difficulty, model type, and compute budget, offering a guide to effective inference-time scaling.
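For context, one widely studied TTS strategy is best-of-N sampling: draw N candidate reasoning traces and keep the one a scorer ranks highest, so a larger budget (larger N) buys a better chance of a high-quality trace. The sketch below is a minimal illustration of that general idea under stated assumptions, not the paper's implementation; `sample_trace` and `score_trace` are hypothetical stand-ins for a model generation call and a verifier or reward model.

```python
import random
from typing import List, Tuple


def sample_trace(prompt: str, rng: random.Random) -> str:
    # Hypothetical stand-in for one LLM generation call; a real system
    # would query a model for a full reasoning trace here.
    return f"trace-{rng.randint(0, 9999)} for: {prompt}"


def score_trace(trace: str, rng: random.Random) -> float:
    # Hypothetical stand-in for a verifier / reward model that assigns
    # a quality score to a candidate trace.
    return rng.random()


def best_of_n(prompt: str, n: int, seed: int = 0) -> Tuple[str, float]:
    """Best-of-N test-time scaling: spend more inference compute
    (larger n) to draw more candidate traces, then keep the one the
    scorer ranks highest."""
    rng = random.Random(seed)
    candidates: List[str] = [sample_trace(prompt, rng) for _ in range(n)]
    scored = [(trace, score_trace(trace, rng)) for trace in candidates]
    return max(scored, key=lambda pair: pair[1])


if __name__ == "__main__":
    best, score = best_of_n("Prove that 17 is prime.", n=8)
    print(f"selected trace: {best!r} (score={score:.3f})")
```

The knob here is n: raising it trades extra inference compute for accuracy, which is the budget-versus-performance axis the study varies across strategies, models, and difficulty levels.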
Similar Papers
AgentTTS: Large Language Model Agent for Test-time Compute-optimal Scaling Strategy in Complex Tasks
Artificial Intelligence
Uses an LLM agent to pick compute-optimal test-time scaling strategies for complex multi-step tasks.
Revisiting Test-Time Scaling: A Survey and a Diversity-Aware Method for Efficient Reasoning
Computation and Language
Surveys test-time scaling and proposes a diversity-aware method for more efficient reasoning.
Are We Scaling the Right Thing? A System Perspective on Test-Time Scaling
Performance
Examines test-time scaling from a systems perspective, weighing answer quality against cost and latency.