LIME: Accelerating Collaborative Lossless LLM Inference on Memory-Constrained Edge Devices
By: Mingyu Sun, Xiao Zhang, Shen Qu, and more
Large language models (LLMs) have emerged as a powerful foundation for intelligent reasoning and decision-making, demonstrating substantial impact across a wide range of domains and applications. However, their massive parameter scales and substantial resource demands pose critical challenges for efficient inference on edge devices. These devices are inherently constrained by limited computational power and memory capacity, while bandwidth bottlenecks at the network edge further restrict distributed deployment and real-time responsiveness. Although existing research has explored lightweight optimization techniques to mitigate memory limitations, such approaches often incur significant degradation in model accuracy and performance. To address these challenges, we propose LIME, a collaborative system that enables lossless inference of large models across multiple memory-constrained edge devices under limited network bandwidth. LIME employs interleaved pipeline parallelism in conjunction with model offloading to dynamically balance computation and communication. Furthermore, a fine-grained offline allocation scheduler and an online memory adaptation strategy are introduced to make full use of each device's computing and storage resources while minimizing inference latency. Extensive experiments demonstrate that LIME, deployed on four heterogeneous NVIDIA Jetson edge devices for LLaMA3.3-70B-Instruct model inference, achieves 1.7$\times$ and 3.7$\times$ speedups over state-of-the-art baselines under sporadic and bursty request patterns, respectively, without compromising model accuracy.
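To make the core idea concrete, the sketch below illustrates how offloading traffic can be hidden behind computation, which is the kind of computation-communication overlap the abstract attributes to LIME's interleaved pipeline with model offloading. This is a minimal, hypothetical illustration and not the LIME implementation: the functions `load_layer` and `compute_layer`, the layer count, and the simulated latencies are all assumptions made purely for demonstration.

```python
# Minimal sketch (hypothetical, not the LIME system): overlap prefetching the
# next layer's weights (offloading traffic) with computing the current layer,
# so transfer latency is hidden behind compute instead of adding to it.
from concurrent.futures import ThreadPoolExecutor
import time

NUM_LAYERS = 8  # assumed toy value

def load_layer(idx):
    """Stand-in for fetching one layer's weights from disk or a peer device."""
    time.sleep(0.05)  # simulated transfer latency
    return f"weights[{idx}]"

def compute_layer(idx, weights, activations):
    """Stand-in for running one transformer layer on the local accelerator."""
    time.sleep(0.05)  # simulated compute latency
    return activations + [idx]

def run_naive(x):
    # Sequential: each layer waits for its transfer, latencies add up.
    for i in range(NUM_LAYERS):
        w = load_layer(i)
        x = compute_layer(i, w, x)
    return x

def run_overlapped(x):
    # Prefetch layer i+1 while computing layer i.
    with ThreadPoolExecutor(max_workers=1) as pool:
        next_w = pool.submit(load_layer, 0)
        for i in range(NUM_LAYERS):
            w = next_w.result()
            if i + 1 < NUM_LAYERS:
                next_w = pool.submit(load_layer, i + 1)
            x = compute_layer(i, w, x)
    return x

if __name__ == "__main__":
    for fn in (run_naive, run_overlapped):
        t0 = time.time()
        fn([])
        print(f"{fn.__name__}: {time.time() - t0:.2f}s")
```

Under these toy latencies the overlapped schedule runs roughly twice as fast as the naive one, which mirrors the intuition behind interleaving offloading with pipelined inference; the actual gains reported in the paper come from its full scheduler and multi-device pipeline, not from this sketch.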