Inference-Cost-Aware Dynamic Tree Construction for Efficient Inference in Large Language Models
By: Yinrong Hong, Zhiquan Tan, Kai Hu
Potential Business Impact:
Makes AI talk and write much faster.
Large Language Models (LLMs) face significant inference latency challenges stemming from their autoregressive design and large size. Speculative decoding has emerged as a solution, enabling multiple tokens to be generated and validated in parallel. While recent approaches such as EAGLE-2 and EAGLE-3 improve speculative decoding with dynamic tree structures, they often neglect the impact of crucial system variables such as GPU devices and batch sizes. We therefore introduce CAST, a new dynamic tree decoding approach that accounts for inference costs, including GPU configuration and batch size, when refining the tree structure. Comprehensive experiments across six diverse tasks and six distinct LLMs show that our method achieves speedups of up to 5.2x over conventional decoding and generally outperforms existing state-of-the-art techniques by 5% to 20%.
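The abstract does not spell out CAST's exact algorithm, but the core idea it describes, growing a speculative draft tree while charging each node a verification cost that depends on the GPU and batch size, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the `draft_topk` interface, the `node_cost` profiling step, and the greedy budgeted expansion are hypothetical stand-ins, not the paper's actual method.

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple


@dataclass(order=True)
class Candidate:
    neg_score: float                      # negated so heapq (a min-heap) pops the best node first
    token: int = field(compare=False)
    depth: int = field(compare=False)
    parent: Optional["Candidate"] = field(compare=False, default=None)


def build_cost_aware_tree(
    root_token: int,
    draft_topk: Callable[[int, int], List[Tuple[int, float]]],
    node_cost: float,
    budget: float,
) -> List[Candidate]:
    """Greedily grow a speculative-decoding draft tree under a verification-cost budget.

    draft_topk(token, depth): hypothetical draft-model interface returning
        (child_token, acceptance_estimate) pairs for the next position.
    node_cost: profiled cost of verifying one extra tree node under the current
        GPU / batch-size configuration (e.g. milliseconds per node).
    budget: total extra verification cost allowed for this decoding step, so the
        tree automatically shrinks when node_cost is high (large batches, slow GPUs).
    """
    root = Candidate(neg_score=-1.0, token=root_token, depth=0)
    tree: List[Candidate] = [root]
    spent = 0.0

    # Frontier holds not-yet-accepted candidates, best expected acceptance first.
    frontier: List[Candidate] = []
    for tok, p in draft_topk(root.token, 0):
        heapq.heappush(frontier, Candidate(-p, tok, 1, root))

    while frontier and spent + node_cost <= budget:
        best = heapq.heappop(frontier)
        tree.append(best)                 # accept the most promising node into the draft tree
        spent += node_cost
        for tok, p in draft_topk(best.token, best.depth):
            # Path score = product of acceptance estimates along the branch.
            heapq.heappush(
                frontier,
                Candidate(best.neg_score * p, tok, best.depth + 1, best),
            )
    return tree                           # flattened tree, verified in one target-model pass
```

Under these assumptions, the cost awareness enters through `node_cost` and `budget`: on a fast GPU with a small batch, the per-node cost is low and the tree grows wide and deep; on a loaded server with large batches, the same routine stops early and falls back toward a narrow chain, which is the kind of system-dependent adaptation the abstract attributes to CAST.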
Similar Papers
Efficient Speculative Decoding for Llama at Scale: Challenges and Solutions
Computation and Language
Makes AI talk much faster.
Latency and Token-Aware Test-Time Compute
Machine Learning (CS)
Makes AI answer questions faster and better.
Efficient LLM Inference over Heterogeneous Edge Networks with Speculative Decoding
Systems and Control
Makes AI answer questions much faster.