Latency and Token-Aware Test-Time Compute
By: Jenny Y. Huang, Mehul Damani, Yousef El-Kurdi, and more
Potential Business Impact:
Makes AI answer questions faster and more accurately while spending less compute.
Inference-time scaling has emerged as a powerful way to improve large language model (LLM) performance by generating multiple candidate responses and selecting among them. However, existing work on dynamic allocation for test-time compute typically considers only parallel generation methods such as best-of-N, overlooking incremental decoding methods like beam search, and has largely ignored latency, focusing only on token usage. We formulate inference-time scaling as a problem of dynamic compute allocation and method selection, where the system must decide which strategy to apply and how much compute to allocate on a per-query basis. Our framework explicitly incorporates both token cost and wall-clock latency, the latter being critical for user experience and particularly for agentic workflows where models must issue multiple queries efficiently. Experiments on reasoning benchmarks show that our approach consistently outperforms static strategies, achieving favorable accuracy-cost trade-offs while remaining practical for deployment.
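The abstract does not spell out the allocation rule itself, but the core idea (per-query selection among scaling strategies, scored on predicted accuracy against a combined token and wall-clock cost) can be sketched roughly as follows. Everything here is an illustrative assumption rather than the authors' method: the `Strategy` menu, the `select_strategy` utility, the `token_price` and `latency_price` weights, and the toy difficulty-conditioned accuracy estimates are all made up for exposition.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Strategy:
    """One candidate inference-time scaling configuration (hypothetical)."""
    name: str                               # e.g. "best_of_8" or "beam_4"
    est_tokens: float                       # expected total generated tokens
    est_latency_s: float                    # expected wall-clock seconds
    est_accuracy: Callable[[float], float]  # predicted accuracy given query difficulty

def select_strategy(
    difficulty: float,
    strategies: List[Strategy],
    token_price: float = 1.0,     # cost per generated token (arbitrary units)
    latency_price: float = 50.0,  # cost per second of wall-clock latency
) -> Strategy:
    """Pick the strategy with the best predicted accuracy-minus-cost utility."""
    def utility(s: Strategy) -> float:
        cost = token_price * s.est_tokens + latency_price * s.est_latency_s
        return s.est_accuracy(difficulty) - 1e-3 * cost
    return max(strategies, key=utility)

# Toy menu: parallel best-of-N spends many tokens but adds little latency,
# while beam search spends fewer tokens but decodes sequentially (more latency).
strategies = [
    Strategy("greedy",    est_tokens=300,  est_latency_s=2.0,
             est_accuracy=lambda d: max(0.0, 0.90 - d)),
    Strategy("best_of_8", est_tokens=2400, est_latency_s=2.5,
             est_accuracy=lambda d: max(0.0, 0.95 - 0.6 * d)),
    Strategy("beam_4",    est_tokens=1200, est_latency_s=6.0,
             est_accuracy=lambda d: max(0.0, 0.95 - 0.5 * d)),
]

for difficulty in (0.1, 0.5, 0.9):
    chosen = select_strategy(difficulty, strategies)
    print(f"difficulty={difficulty:.1f} -> {chosen.name}")
```

The point of the sketch is only the decision structure: because latency enters the cost alongside tokens, a latency-heavy incremental method like beam search and a token-heavy parallel method like best-of-N can each win on different queries, which is what a static, one-strategy-fits-all policy cannot capture.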
Similar Papers
The Art of Scaling Test-Time Compute for Large Language Models
Computation and Language
Makes AI think better by changing how it works.
Are We Scaling the Right Thing? A System Perspective on Test-Time Scaling
Performance
Makes AI answer questions faster and cheaper.
Strategic Scaling of Test-Time Compute: A Bandit Learning Approach
Artificial Intelligence
Smartly uses computer power for harder questions.