Arbitrage: Efficient Reasoning via Advantage-Aware Speculation
By: Monishwaran Maheswaran, Rishabh Tiwari, Yuezhou Hu, and more
Potential Business Impact:
Makes AI think faster and smarter.
Modern Large Language Models achieve impressive reasoning capabilities by generating long chains of thought, but these long generations incur substantial computational cost at inference time, motivating techniques that improve the performance-cost ratio. Among these techniques, Speculative Decoding accelerates inference by employing a fast but less accurate draft model to autoregressively propose tokens, which are then verified in parallel by a more capable target model. However, traditional token-level Speculative Decoding struggles on reasoning tasks: token mismatches in semantically equivalent steps cause unnecessary rejections. Recent works have therefore shifted to step-level semantic verification, which improves efficiency by accepting or rejecting entire reasoning steps, but existing step-level methods still regenerate many rejected steps with little quality improvement, wasting valuable target compute. To address this challenge, we propose Arbitrage, a novel step-level speculative generation framework that routes generation dynamically based on the relative advantage between the draft and target models. Instead of applying a fixed acceptance threshold, Arbitrage uses a lightweight router trained to predict when the target model is likely to produce a meaningfully better step. This routing approximates an ideal Arbitrage Oracle that always chooses the higher-quality step, achieving near-optimal efficiency-accuracy trade-offs. Across multiple mathematical reasoning benchmarks, Arbitrage consistently surpasses prior step-level Speculative Decoding baselines, reducing inference latency by up to $\sim 2\times$ at matched accuracy.
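To make the routing idea concrete, here is a minimal sketch of one advantage-aware generation loop. The interfaces (`generate_step`, `predicts_target_better`, `stop_token`) are illustrative assumptions, not the paper's actual API; the sketch only shows where the router sits relative to the draft and target models.

```python
# Minimal sketch of advantage-aware step-level speculation (assumed
# interfaces, not the paper's implementation). The draft model always
# proposes a step; the target model regenerates it only when a
# lightweight router predicts a meaningful quality advantage.

def arbitrage_generate(prompt, draft_model, target_model, router,
                       max_steps=64, stop_token="</answer>"):
    """Generate a chain of thought step by step, routing each step
    to the draft or target model based on predicted relative advantage."""
    context = prompt
    for _ in range(max_steps):
        # 1. The cheap draft model autoregressively proposes the next
        #    reasoning step.
        draft_step = draft_model.generate_step(context)

        # 2. The router approximates the Arbitrage Oracle: spend target
        #    compute only when the target's step is predicted to be
        #    meaningfully better than the draft's.
        if router.predicts_target_better(context, draft_step):
            step = target_model.generate_step(context)
        else:
            step = draft_step

        context += step
        if stop_token in step:
            break
    return context
```

Note the contrast with threshold-based step-level verification: rather than rejecting a draft step whenever its score falls below a fixed cutoff, the router's learned decision avoids regenerating steps the target model would not materially improve.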
Similar Papers
Confidence-Modulated Speculative Decoding for Large Language Models
Computation and Language
Makes AI write faster and smarter.
Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding
Machine Learning (CS)
Makes AI answer questions much faster.
Speculative Decoding in Decentralized LLM Inference: Turning Communication Latency into Computation Throughput
Distributed, Parallel, and Cluster Computing
Makes AI talk faster when shared.