Model-Distributed Inference for Large Language Models at the Edge
By: Davide Macario, Hulya Seferoglu, Erdem Koyuncu
Potential Business Impact:
Lets phones run smart AI without needing a supercomputer.
We introduce Model-Distributed Inference for Large Language Models (MDI-LLM), a novel framework for deploying state-of-the-art large language models (LLMs) across low-power devices at the edge. The model is divided into multiple partitions, which are assigned to different devices/nodes within the network. These nodes exchange intermediate activation vectors over device-to-device links, enabling collaborative computation. To improve the efficiency of this process, we propose the "recurrent pipeline parallelism" technique, which reduces idle time on each device and enables parallel inference while generating multiple text sequences. By pooling the computational resources of multiple edge devices, MDI-LLM can run LLMs that exceed the memory capacity of any single device, making inference possible on low-cost hardware. Furthermore, as the number of participating devices increases, MDI-LLM boosts token generation throughput and reduces per-device memory consumption.
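To make the two ideas in the abstract concrete, here is a minimal, single-process sketch in Python/PyTorch of (1) splitting a decoder-only model into partitions assigned to different devices and (2) keeping several sequences in flight so every partition stays busy, in the spirit of recurrent pipeline parallelism. The `Stage` class, the model sizes, and the scheduling loop are illustrative assumptions for this sketch, not the authors' implementation; network transfers between devices are simulated by in-memory hand-offs.

```python
# Illustrative sketch only: class names, sizes, and scheduling are assumptions,
# not the MDI-LLM implementation. Device-to-device links are simulated in memory.
import torch
import torch.nn as nn

N_BLOCKS_PER_STAGE, D_MODEL, VOCAB = 4, 256, 1000

class Stage(nn.Module):
    """One model partition: a contiguous slice of transformer blocks."""
    def __init__(self, n_blocks, is_first=False, is_last=False):
        super().__init__()
        self.is_first, self.is_last = is_first, is_last
        self.embed = nn.Embedding(VOCAB, D_MODEL) if is_first else None
        # TransformerEncoderLayer stands in for a causal LLM block here.
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
            for _ in range(n_blocks))
        self.head = nn.Linear(D_MODEL, VOCAB) if is_last else None

    def forward(self, x):
        # The first stage receives token ids; later stages receive the
        # activation tensor forwarded over a device-to-device link.
        if self.embed is not None:
            x = self.embed(x)
        for blk in self.blocks:
            x = blk(x)
        return self.head(x) if self.head is not None else x

# Three hypothetical edge devices, each holding one partition.
stages = [Stage(N_BLOCKS_PER_STAGE, is_first=True),
          Stage(N_BLOCKS_PER_STAGE),
          Stage(N_BLOCKS_PER_STAGE, is_last=True)]

@torch.no_grad()
def generate(stages, prompts, new_tokens=8):
    seqs = [p.clone() for p in prompts]
    done = [0] * len(seqs)              # tokens generated per sequence
    in_flight = [None] * len(stages)    # (sequence idx, tensor) at each stage
    while min(done) < new_tokens:
        # Advance stages back to front so each hand-off lands in a free slot;
        # on real hardware all stages would run concurrently on each "tick".
        for k in reversed(range(len(stages))):
            if in_flight[k] is None:
                continue
            i, x = in_flight[k]
            out, in_flight[k] = stages[k](x), None
            if stages[k].is_last:
                tok = out[0, -1].argmax().unsqueeze(0)   # greedy decoding
                seqs[i] = torch.cat([seqs[i], tok])
                done[i] += 1            # sequence loops back to stage 0 later
            else:
                in_flight[k + 1] = (i, out)
        # Refill the first stage with an idle, unfinished sequence so the
        # generation of several texts proceeds in parallel.
        if in_flight[0] is None:
            busy = {slot[0] for slot in in_flight if slot is not None}
            idle = [i for i in range(len(seqs))
                    if i not in busy and done[i] < new_tokens]
            if idle:
                in_flight[0] = (idle[0], seqs[idle[0]].unsqueeze(0))
    return seqs

prompts = [torch.randint(0, VOCAB, (5,)) for _ in range(3)]
print([s.tolist() for s in generate(stages, prompts)])
```

In a real deployment each `Stage` would reside on a separate edge device and only the intermediate activations would cross the network; the round-robin refill of the first stage is what keeps partitions from sitting idle while one sequence is being decoded.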
Similar Papers
MNN-LLM: A Generic Inference Engine for Fast Large Language Model Deployment on Mobile Devices
Machine Learning (CS)
Makes big AI models run fast on phones.
Distributed On-Device LLM Inference With Over-the-Air Computation
Distributed, Parallel, and Cluster Computing
Lets phones run smart AI without internet.
Large Language Model Partitioning for Low-Latency Inference at the Edge
Distributed, Parallel, and Cluster Computing
Makes AI write faster on small computers.