Score: 1

Faster and Better LLMs via Latency-Aware Test-Time Scaling

Published: May 26, 2025 | arXiv ID: 2505.19634v4

By: Zili Wang, Tianyu Zhang, Haoli Bai, and more

BigTech Affiliations: Huawei

Potential Business Impact:

Makes AI answer math problems faster and more accurately.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Test-Time Scaling (TTS) has proven effective at improving the performance of Large Language Models (LLMs) during inference. However, existing research has overlooked the efficiency of TTS from a latency-sensitive perspective. Through a latency-aware evaluation of representative TTS methods, we demonstrate that compute-optimal TTS does not always yield the lowest latency in scenarios where latency is critical. To close this gap and achieve latency-optimal TTS, we propose two approaches that optimize concurrency configurations: (1) branch-wise parallelism, which leverages multiple concurrent inference branches, and (2) sequence-wise parallelism, enabled by speculative decoding. By integrating these two approaches and properly allocating computational resources to each, our latency-optimal TTS enables a 32B model to reach 82.3% accuracy on MATH-500 within 1 minute and a smaller 3B model to achieve 72.4% within 10 seconds. Our work highlights the importance of latency-aware TTS and demonstrates that it can deliver both speed and accuracy in latency-sensitive scenarios.
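To make the branch-wise parallelism idea concrete, here is a minimal Python sketch: it launches several concurrent inference branches under a fixed latency budget and aggregates whatever finishes in time by majority vote. The sample_branch stub and the majority-vote selection are illustrative assumptions, not the paper's implementation, and the sequence-wise parallelism via speculative decoding that the paper combines with this is not shown here.

    # Sketch of branch-wise parallel TTS under a latency budget (assumptions:
    # sample_branch is a stand-in for a real LLM call; majority vote is one
    # common TTS answer-selection strategy, not necessarily the paper's).
    import asyncio
    import random
    from collections import Counter

    async def sample_branch(prompt: str, branch_id: int) -> str:
        """Stub for one concurrent inference branch (e.g., one sampled
        chain-of-thought). Replace with a real LLM API call."""
        await asyncio.sleep(random.uniform(0.1, 0.5))  # simulated decode latency
        return random.choice(["42", "42", "41"])       # simulated final answer

    async def latency_aware_tts(prompt: str, n_branches: int, budget_s: float) -> str:
        """Run n_branches concurrently; aggregate branches that finish
        within budget_s seconds by majority vote."""
        tasks = [asyncio.create_task(sample_branch(prompt, i))
                 for i in range(n_branches)]
        done, pending = await asyncio.wait(tasks, timeout=budget_s)
        for t in pending:  # drop branches that exceed the latency budget
            t.cancel()
        answers = [t.result() for t in done]
        if not answers:
            raise TimeoutError("no branch finished within the latency budget")
        return Counter(answers).most_common(1)[0][0]

    if __name__ == "__main__":
        result = asyncio.run(
            latency_aware_tts("What is 6 * 7?", n_branches=8, budget_s=1.0))
        print(result)

Because all branches decode concurrently, wall-clock latency is bounded by the budget rather than growing with the number of samples, which is the distinction between compute-optimal and latency-optimal scaling that the abstract draws.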

Country of Origin
🇨🇳 China

Page Count
14 pages

Category
Computer Science: Computation and Language