Dynamic Rebatching for Efficient Early-Exit Inference with DREX
By: Xuting Liu, Daniel Alexander, Siva Kesava Reddy Kakarla, and more
Potential Business Impact:
Makes AI answer questions much faster.
Early-Exit (EE) is a Large Language Model (LLM) architecture that accelerates inference by allowing easier tokens to be generated using only a subset of the model's layers. However, traditional batching frameworks are ill-suited for EE LLMs, because not all requests in a batch are ready to exit at the same time. Existing solutions either force a uniform decision on the whole batch, which forfeits EE opportunities, or degrade output quality by forcing premature exits. We propose Dynamic Rebatching, a technique that reorganizes the batch at each early-exit point: requests that meet the exit criteria emit their tokens immediately, while those that must continue are held in a buffer, regrouped into a new batch, and forwarded to deeper layers. We introduce DREX, an early-exit inference system that implements Dynamic Rebatching with two key optimizations: 1) a copy-free rebatching buffer that avoids physical data movement, and 2) an EE- and SLA-aware scheduler that analytically predicts whether a given rebatching operation will be profitable. DREX also handles the KV cache entries missing from skipped layers via memory-efficient state-copying. Our evaluation shows that DREX improves throughput by 2-12% over baseline approaches while maintaining output quality. Crucially, DREX completely eliminates involuntary exits, a key guarantee for preserving the output quality intended by the EE model.
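To make the rebatching loop concrete, here is a minimal Python sketch of one decode step under Dynamic Rebatching. It is illustrative only, not DREX's implementation: `exit_confidence`, `EXIT_THRESHOLD`, `MIN_PROFITABLE_BATCH`, and `rebatch_profitable` are hypothetical stand-ins for the paper's exit criterion and for the EE- and SLA-aware scheduler's analytical profitability test.

```python
# Minimal, illustrative sketch of Dynamic Rebatching (not the DREX code).
from dataclasses import dataclass

EXIT_THRESHOLD = 0.9       # hypothetical exit-confidence cutoff
MIN_PROFITABLE_BATCH = 2   # hypothetical scheduler knob

@dataclass
class Request:
    rid: int
    difficulty: float      # toy proxy: harder requests exit later

def exit_confidence(req: Request, layer: int, num_layers: int) -> float:
    """Stand-in for the EE model's exit classifier at this layer."""
    return min(1.0, (layer + 1) / (num_layers * req.difficulty))

def rebatch_profitable(n_continuing: int) -> bool:
    """Stand-in for DREX's analytical prediction: rebatching pays off
    only if enough requests remain to keep the deeper layers busy."""
    return n_continuing >= MIN_PROFITABLE_BATCH

def decode_step(batch, num_layers, exit_points):
    """One token-generation pass; returns (rid, layer_exited) pairs."""
    finished = []
    for layer in range(num_layers):
        # ... run transformer layer `layer` over the whole batch ...
        if layer in exit_points:
            ready = [r for r in batch
                     if exit_confidence(r, layer, num_layers) >= EXIT_THRESHOLD]
            continuing = [r for r in batch if r not in ready]
            if ready and (not continuing or rebatch_profitable(len(continuing))):
                # Ready requests emit their tokens now; the rest are
                # regrouped (by index in DREX, with no physical copies)
                # and forwarded to the deeper layers.
                finished += [(r.rid, layer) for r in ready]
                batch = continuing
            # else: skip this EE opportunity and keep the batch intact;
            # no request is ever forced into an involuntary exit.
    finished += [(r.rid, num_layers - 1) for r in batch]
    return finished

print(decode_step(
    [Request(0, 0.4), Request(1, 0.5), Request(2, 1.5), Request(3, 2.0)],
    num_layers=8, exit_points={1, 3, 5}))
```

The sketch mirrors the two behaviors claimed in the abstract: requests leave only voluntarily, when their own confidence clears the threshold, and when rebatching would not pay off, the batch simply stays together through deeper layers rather than forcing any request out early.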
Similar Papers
Accelerating Large Language Model Inference via Early-Exiting Algorithms
Computation and Language
Makes smart computer programs run faster and cheaper.
LYNX: Learning Dynamic Exits for Confidence-Controlled Reasoning
Computation and Language
Lets AI stop thinking early, saving time and energy.
HELIOS: Adaptive Model And Early-Exit Selection for Efficient LLM Inference Serving
Computation and Language
Makes AI answer faster and use less power.