On the Limits of Test-Time Compute: Sequential Reward Filtering for Better Inference
By: Yue Yu, Qiwei Di, Quanquan Gu, and more
Potential Business Impact:
Improves LLM answers at inference time by keeping only the highest-reward generations.
Test-time compute (TTC) has become an increasingly prominent paradigm for enhancing large language models (LLMs). Despite the empirical success of methods such as best-of-$n$ (BoN) sampling and sequential revision, their fundamental limits remain unclear. We address this gap by analyzing a mixture-of-reference policy model and proving that standard BoN is inherently suboptimal. To move closer to the optimal frontier, we study reward-filtered sequential inference, a simple procedure that selectively incorporates only high-reward generations into the context. This mechanism concentrates computation on superior policy candidates and suppresses inferior ones. On the theoretical side, we show that reward-filtered sequential inference yields strictly stronger guarantees than standard TTC paradigms. On the empirical side, we evaluate such an inference strategy across diverse benchmarks and observe consistent improvements over widely used approaches, demonstrating the practical effectiveness of our framework.
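To make the filtering mechanism described in the abstract concrete, the sketch below contrasts standard best-of-n sampling with a reward-filtered sequential loop in which only generations whose reward clears a threshold are carried forward in the context. The generate and reward_model callables, the threshold rule, and the choice to return the single best-scoring response are illustrative assumptions, not the authors' implementation.

from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str, List[str]], str],
              reward_model: Callable[[str, str], float],
              n: int) -> str:
    # Standard BoN: draw n independent responses, return the highest-reward one.
    candidates = [generate(prompt, []) for _ in range(n)]
    return max(candidates, key=lambda r: reward_model(prompt, r))

def reward_filtered_sequential(prompt: str,
                               generate: Callable[[str, List[str]], str],
                               reward_model: Callable[[str, str], float],
                               n: int,
                               threshold: float) -> str:
    # Reward-filtered sequential inference (sketch): each round conditions on the
    # context so far, and only high-reward generations are added to that context.
    context: List[str] = []
    best_response, best_score = "", float("-inf")
    for _ in range(n):
        response = generate(prompt, context)
        score = reward_model(prompt, response)
        if score >= threshold:
            context.append(response)  # keep only high-reward generations in context
        if score > best_score:
            best_response, best_score = response, score
    return best_response

Under this sketch, BoN spends its budget on independent samples, while the filtered variant concentrates later compute on contexts built from superior candidates, which is the mechanism the paper analyzes when proving stronger guarantees than standard TTC paradigms.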
Similar Papers
RTTC: Reward-Guided Collaborative Test-Time Compute
Computation and Language
Smartly chooses AI tricks to answer questions better.
Towards Reasoning for PDE Foundation Models: A Reward-Model-Driven Inference-Time-Scaling Algorithm
Machine Learning (CS)
Makes computer simulations of science problems smarter.