Revisiting Test-Time Scaling: A Survey and a Diversity-Aware Method for Efficient Reasoning
By: Ho-Lam Chung, Teng-Yun Hsiao, Hsiao-Ying Huang, and more
Potential Business Impact:
Makes AI reason better while using much less computing power.
Test-Time Scaling (TTS) improves the reasoning performance of Large Language Models (LLMs) by allocating additional compute during inference. We conduct a structured survey of TTS methods and categorize them into sampling-based, search-based, and trajectory optimization strategies. We observe that reasoning-optimized models often produce less diverse outputs, which limits TTS effectiveness. To address this, we propose ADAPT (A Diversity-Aware Prefix fine-Tuning), a lightweight method that applies prefix tuning with a diversity-focused data strategy. Experiments on mathematical reasoning tasks show that ADAPT reaches 80% accuracy using eight times less compute than strong baselines. Our findings highlight the essential role of generative diversity in maximizing TTS effectiveness.
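To make the sampling-based strategy concrete, below is a minimal sketch of self-consistency-style majority voting, one common form of test-time scaling: sample several reasoning traces, extract each final answer, and return the most frequent one. This is an illustration of the general strategy the survey categorizes, not the authors' ADAPT method; the `generate(prompt, temperature)` callable and the "Answer: ..." output format are assumptions for the example.

```python
# Sketch of sampling-based test-time scaling via majority voting
# (self-consistency). Assumes a hypothetical `generate(prompt, temperature)`
# function that returns one model completion ending in "Answer: <value>".
import re
from collections import Counter
from typing import Callable, Optional


def extract_answer(completion: str) -> Optional[str]:
    """Pull the final answer from a reasoning trace of the form '... Answer: X'."""
    match = re.search(r"Answer:\s*(.+)", completion)
    return match.group(1).strip() if match else None


def self_consistency(prompt: str,
                     generate: Callable[[str, float], str],
                     n_samples: int = 8,
                     temperature: float = 0.8) -> Optional[str]:
    """Sample n reasoning traces and return the most frequent final answer.

    Spending more test-time compute (larger n_samples) generally helps,
    but only if the sampled traces are diverse enough to actually disagree,
    which is the gap the paper's diversity-aware tuning targets.
    """
    answers = []
    for _ in range(n_samples):
        answer = extract_answer(generate(prompt, temperature))
        if answer is not None:
            answers.append(answer)
    if not answers:
        return None
    return Counter(answers).most_common(1)[0][0]
```

A nonzero sampling temperature is what produces disagreement between traces; with a reasoning-optimized model that collapses onto near-identical outputs, the vote adds little, which is the motivation for the diversity-focused approach described in the abstract.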
Similar Papers
Mitigating Strategy-Selection Bias in Reasoning for More Effective Test-Time Scaling
Artificial Intelligence
Makes AI think of more ways to solve problems.
The Art of Scaling Test-Time Compute for Large Language Models
Computation and Language
Makes AI think better by changing how it works.
Step-level Verifier-guided Hybrid Test-Time Scaling for Large Language Models
Computation and Language
Makes AI think better without extra training.