LoRA-Drop: Temporal LoRA Decoding for Efficient LLM Inference
By: Hossein Rajabzadeh, Maryam Dialameh, Chul B. Park, and more
Potential Business Impact:
Makes AI talk faster and use less memory.
Autoregressive large language models (LLMs) are bottlenecked by sequential decoding, where each new token typically requires executing all transformer layers. Existing dynamic-depth and layer-skipping methods reduce this cost, but they often rely on auxiliary routing mechanisms or suffer accuracy degradation when bypassed layers are left uncompensated. We present LoRA-Drop, a plug-and-play inference framework that accelerates decoding by applying a temporal compute schedule to a fixed subset of intermediate layers: on most decoding steps, the selected layers reuse the previous-token hidden state and apply a low-rank LoRA correction, while periodic refresh steps execute the full model to prevent drift. LoRA-Drop requires no routing network, is compatible with standard KV caching, and can reduce the KV-cache footprint by skipping KV updates in droppable layers during LoRA steps and refreshing them periodically. Across LLaMA2-7B, LLaMA3-8B, Qwen2.5-7B, and Qwen2.5-14B, LoRA-Drop achieves up to 2.6× faster decoding and 45–55% KV-cache reduction while staying within 0.5 percentage points (pp) of baseline accuracy. Evaluations on reasoning (GSM8K, MATH, BBH), code generation (HumanEval, MBPP), and long-context/multilingual benchmarks (LongBench, XNLI, XCOPA) identify a consistent safe zone of scheduling configurations that preserves quality while delivering substantial efficiency gains, providing a simple path toward adaptive-capacity inference in LLMs. Code is available at https://github.com/hosseinbv/LoRA-Drop.git.
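As a rough illustration of the temporal schedule described in the abstract, the PyTorch sketch below shows one way a fixed set of droppable layers could alternate between cheap LoRA steps and periodic full-model refresh steps. The class and parameter names (LoRADropDecoder, droppable_idx, refresh_period, lora_down, lora_up) are illustrative assumptions and are not taken from the released repository; real transformer layers also take attention masks, position ids, and KV-cache arguments that are omitted here for brevity.

```python
# Minimal sketch of a temporal LoRA/refresh schedule, assuming each layer is a
# callable module mapping hidden states to hidden states. Not the authors' code.
import torch
import torch.nn as nn

class LoRADropDecoder(nn.Module):
    def __init__(self, layers, droppable_idx, hidden_size, rank=8, refresh_period=4):
        super().__init__()
        self.layers = nn.ModuleList(layers)           # full transformer stack
        self.droppable = set(droppable_idx)           # fixed subset of intermediate layers
        self.refresh_period = refresh_period          # run the full model every R-th token
        # one low-rank correction (up @ down) per droppable layer
        self.lora_down = nn.ModuleDict({str(i): nn.Linear(hidden_size, rank, bias=False)
                                        for i in droppable_idx})
        self.lora_up = nn.ModuleDict({str(i): nn.Linear(rank, hidden_size, bias=False)
                                      for i in droppable_idx})
        self.prev_hidden = {}                         # per-layer output cached at refresh steps

    def forward(self, h, step):
        # step 0 is always a refresh, so prev_hidden is populated before any LoRA step
        refresh = (step % self.refresh_period == 0)
        for i, layer in enumerate(self.layers):
            if refresh or i not in self.droppable:
                h = layer(h)                          # full layer compute (with KV update)
                if i in self.droppable:
                    self.prev_hidden[i] = h.detach()  # remember output for later LoRA steps
            else:
                # LoRA step: skip the layer, reuse the previous token's cached output,
                # and add a cheap low-rank correction of the current input;
                # no KV update is performed for this layer on such steps.
                key = str(i)
                h = self.prev_hidden[i] + self.lora_up[key](self.lora_down[key](h))
        return h
```

In this reading of the abstract, refresh_period would control the speed/quality trade-off: larger values skip the droppable layers on more decoding steps (faster, smaller KV footprint) but rely more heavily on the low-rank correction to keep the hidden states from drifting.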
Similar Papers
Predictive-LoRA: A Proactive and Fragmentation-Aware Serverless Inference System for LLMs
Distributed, Parallel, and Cluster Computing
Makes AI models answer questions much faster.
DropLoRA: Sparse Low-Rank Adaptation for Parameter-Efficient Fine-Tuning
Computation and Language
Makes AI smarter without more training.
Cross-LoRA: A Data-Free LoRA Transfer Framework across Heterogeneous LLMs
Machine Learning (CS)
Moves AI skills between different computer brains.