Serving Heterogeneous LoRA Adapters in Distributed LLM Inference Systems
By: Shashwat Jaiswal, Shrikara Arun, Anjaly Parayil, and more
Potential Business Impact:
Makes AI models run faster using fewer computers.
Low-Rank Adaptation (LoRA) has become the de facto method for parameter-efficient fine-tuning of large language models (LLMs), enabling rapid adaptation to diverse domains. In production, LoRA-based models are served at scale, creating multi-tenant environments with hundreds of adapters sharing a base model. However, state-of-the-art serving systems co-batch heterogeneous adapters without accounting for rank (size) variability, leading to severe performance skew, which ultimately requires adding more GPUs to satisfy service-level objectives (SLOs). Existing optimizations, focused on loading, caching, and kernel execution, ignore this heterogeneity, leaving GPU resources underutilized. We present LoRAServe, a workload-aware dynamic adapter placement and routing framework designed to tame rank diversity in LoRA serving. By dynamically rebalancing adapters across GPUs and leveraging GPU Direct RDMA for remote access, LoRAServe maximizes throughput and minimizes tail latency under real-world workload drift. Evaluations on production traces from Company X show that LoRAServe achieves up to 2$\times$ higher throughput and up to 9$\times$ lower time-to-first-token (TTFT), while using up to 50% fewer GPUs under SLO constraints compared to state-of-the-art systems.
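To make the rank-heterogeneity problem concrete, below is a minimal Python sketch of one possible rank-aware placement heuristic, assuming an adapter's load scales with its rank times its request rate. The names (`Adapter`, `place_adapters`) and the greedy bin-packing policy are illustrative assumptions, not LoRAServe's actual algorithm or API.

```python
# Hypothetical sketch: spread heterogeneous LoRA adapters across GPUs so that
# estimated load (rank x request rate) stays balanced. This is NOT the
# LoRAServe implementation, only an illustration of rank-aware placement.
from dataclasses import dataclass
import heapq


@dataclass
class Adapter:
    name: str
    rank: int          # LoRA rank, e.g. 8, 16, 64
    req_rate: float    # observed requests/sec from the workload trace


def place_adapters(adapters: list[Adapter], num_gpus: int) -> dict[int, list[str]]:
    """Greedy longest-processing-time bin packing: assign the heaviest
    adapters first, each to the currently least-loaded GPU."""
    # Min-heap of (current_load, gpu_id).
    heap = [(0.0, g) for g in range(num_gpus)]
    heapq.heapify(heap)
    placement: dict[int, list[str]] = {g: [] for g in range(num_gpus)}

    # Sort by estimated load so large-rank, hot adapters get spread first.
    for a in sorted(adapters, key=lambda a: a.rank * a.req_rate, reverse=True):
        load, gpu = heapq.heappop(heap)
        placement[gpu].append(a.name)
        heapq.heappush(heap, (load + a.rank * a.req_rate, gpu))
    return placement


if __name__ == "__main__":
    demo = [
        Adapter("legal-r64", 64, 5.0),
        Adapter("chat-r8", 8, 40.0),
        Adapter("code-r32", 32, 12.0),
        Adapter("summ-r16", 16, 20.0),
    ]
    print(place_adapters(demo, num_gpus=2))
```

In a real system such a heuristic would be re-run as the workload drifts, with remote access (e.g. over GPU Direct RDMA, as the paper describes) covering adapters that are not resident on the GPU handling a given request.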
Similar Papers
PLoRA: Efficient LoRA Hyperparameter Tuning for Large Models
Machine Learning (CS)
Makes AI learn new things much faster.
LoRA on the Go: Instance-level Dynamic LoRA Selection and Merging
Computation and Language
Lets AI switch jobs instantly without retraining.
LoRAverse: A Submodular Framework to Retrieve Diverse Adapters for Diffusion Models
CV and Pattern Recognition
Finds best AI art styles from many options.