A Hybrid Early-Exit Algorithm for Large Language Models Based on Space Alignment Decoding (SPADE)
By: Bowen Zheng, Ming Ma, Zhongqiao Lin and more
Potential Business Impact:
Makes large language models run faster and cheaper.
Large language models are computationally expensive due to their deep structures. Prior research has shown that intermediate layers contain sufficient information to generate accurate answers, leading to the development of early-exit algorithms that reduce inference costs by terminating computation at earlier layers. However, these methods often suffer from poor performance due to misalignment between intermediate and output layer representations, which leads to decoding inaccuracy. To address these challenges, we propose SPADE (SPace Alignment DEcoding), a novel decoding method that aligns intermediate layer representations with the output layer by propagating a minimally reduced sequence consisting of only the start token and the answer token. We further optimize the early-exit decision-making process by training a linear approximation of SPADE that computes entropy-based confidence metrics. Combining these components, we create a hybrid early-exit algorithm that monitors confidence levels and stops inference at intermediate layers while using SPADE to generate high-quality outputs. This approach significantly reduces inference costs without compromising accuracy, offering a scalable and efficient solution for deploying large language models in real-world applications.
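To make the hybrid scheme concrete, the sketch below illustrates the general shape of an entropy-based early-exit decision loop, assuming PyTorch. It is not the authors' implementation: the names (hybrid_early_exit_step, linear_probes, output_head) and the entropy threshold are hypothetical, and full SPADE decoding of the reduced start-token/answer-token sequence is stood in for by a simple projection at the chosen exit layer.

```python
import torch
import torch.nn.functional as F

def hybrid_early_exit_step(hidden_states, linear_probes, output_head,
                           entropy_threshold=2.0):
    """Hypothetical sketch of a hybrid early-exit step.

    hidden_states: per-layer hidden states for the current answer token,
        each of shape (hidden_dim,).
    linear_probes: per-layer linear heads approximating SPADE, used only
        to compute an entropy-based confidence score (assumed names).
    output_head: projection to vocabulary logits, used here as a stand-in
        for SPADE decoding at the chosen exit layer.
    """
    for layer_idx, h in enumerate(hidden_states):
        # Cheap linear approximation gives an approximate next-token
        # distribution at this intermediate layer.
        logits = linear_probes[layer_idx](h)
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * torch.log(probs + 1e-9)).sum()

        # Low entropy = high confidence: stop here instead of running
        # the remaining layers, and decode from this layer.
        if entropy < entropy_threshold:
            token_id = output_head(h).argmax().item()  # SPADE decoding would replace this
            return token_id, layer_idx

    # No intermediate layer was confident enough: use the final layer.
    token_id = output_head(hidden_states[-1]).argmax().item()
    return token_id, len(hidden_states) - 1
```

In this sketch, the per-layer probes are the cheap confidence monitor and the projection at the exit point stands in for the space-aligned decoding step that the paper's SPADE method provides.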
Similar Papers
SPADE: Structured Pruning and Adaptive Distillation for Efficient LLM-TTS
Audio and Speech Processing
Makes AI voices sound better and faster.
Accelerating Large Language Model Inference via Early-Exiting Algorithms
Computation and Language
Makes smart computer programs run faster and cheaper.