BuddyMoE: Exploiting Expert Redundancy to Accelerate Memory-Constrained Mixture-of-Experts Inference
By: Yun Wang, Lingyun Yang, Senhao Yu, and more
Potential Business Impact:
Lets big AI models run fast without needing more computer memory.
Mixture-of-Experts (MoE) architectures scale language models by activating only a subset of specialized expert networks for each input token, thereby reducing the number of floating-point operations. However, modern MoE models have grown so large that their full parameter sets exceed GPU memory capacity; Mixtral-8x7B, for example, has 45 billion parameters and requires 87 GB of memory even though only 14 billion parameters are active per token. Existing systems alleviate this limitation by offloading inactive experts to CPU memory, but transferring an expert across the PCIe interconnect incurs significant latency (about 10 ms). Prefetching heuristics aim to hide this latency by predicting which experts will be needed, but mispredictions still stall the pipeline and amplify inference latency. When a prefetch fails, prior work offers two options: fetch the expert on demand, which incurs a long stall at the PCIe bottleneck, or drop the expert from the computation, which noticeably degrades model accuracy. The critical challenge, therefore, is to preserve both inference speed and model accuracy when prefetching fails.
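The sketch below illustrates the prefetch-miss trade-off the abstract describes. It is a minimal, hypothetical Python model, not code from the paper: the names (ExpertCache, resolve_expert, buddy_of) are invented for illustration, and the "buddy" substitution policy is an assumption inferred from the paper's title ("Exploiting Expert Redundancy"), shown alongside the two baseline choices of on-demand fetching and expert dropping.

```python
# Hypothetical sketch of prefetch-miss handling in an offloaded MoE system.
# All names are illustrative; the "buddy" policy is an assumption based on
# the paper's title, not a confirmed description of BuddyMoE's algorithm.
from dataclasses import dataclass, field


@dataclass
class ExpertCache:
    """Tracks which experts are currently resident in GPU memory."""
    capacity: int
    resident: set[int] = field(default_factory=set)

    def prefetch(self, expert_ids: list[int]) -> None:
        # Load the experts a predictor expects to be used next,
        # evicting an arbitrary resident expert when the cache is full.
        for eid in expert_ids:
            if len(self.resident) >= self.capacity and eid not in self.resident:
                self.resident.pop()
            self.resident.add(eid)

    def has(self, expert_id: int) -> bool:
        return expert_id in self.resident


PCIE_TRANSFER_MS = 10.0  # approximate per-expert CPU->GPU transfer latency


def resolve_expert(cache: ExpertCache, expert_id: int,
                   policy: str = "on_demand",
                   buddy_of: dict[int, int] | None = None) -> tuple[int | None, float]:
    """Return (expert to run, stall in ms) when the router selects `expert_id`.

    policy = "on_demand": fetch over PCIe, paying the ~10 ms stall.
    policy = "drop":      skip the expert, trading accuracy for latency.
    policy = "buddy":     (assumed) substitute a redundant, already-resident expert.
    """
    if cache.has(expert_id):
        return expert_id, 0.0            # prefetch hit: no stall
    if policy == "on_demand":
        cache.resident.add(expert_id)    # stand-in for the blocking PCIe copy
        return expert_id, PCIE_TRANSFER_MS
    if policy == "drop":
        return None, 0.0                 # expert skipped; accuracy degrades
    if policy == "buddy" and buddy_of and cache.has(buddy_of.get(expert_id, -1)):
        return buddy_of[expert_id], 0.0  # run a similar resident expert instead
    return expert_id, PCIE_TRANSFER_MS   # otherwise fall back to an on-demand fetch


if __name__ == "__main__":
    cache = ExpertCache(capacity=2)
    cache.prefetch([0, 3])               # predictor guessed experts 0 and 3
    buddies = {5: 3}                     # assumed redundancy map: expert 5 ~ expert 3
    for policy in ("on_demand", "drop", "buddy"):
        expert, stall = resolve_expert(cache, 5, policy, buddies)
        print(f"{policy:10s} -> run expert {expert}, stall {stall} ms")
        cache.resident.discard(5)        # reset before trying the next policy
```

Under these assumptions, only the substitution policy avoids both the ~10 ms PCIe stall and the accuracy loss of dropping the expert, which is the gap the abstract identifies.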
Similar Papers
MoE-SpeQ: Speculative Quantized Decoding with Proactive Expert Prefetching and Offloading for Mixture-of-Experts
Machine Learning (CS)
Makes smart AI run faster on less powerful computers.
ExpertFlow: Adaptive Expert Scheduling and Memory Coordination for Efficient MoE Inference
Distributed, Parallel, and Cluster Computing
Makes AI models run faster on less memory.
Accelerating Mixture-of-Expert Inference with Adaptive Expert Split Mechanism
Machine Learning (CS)
Makes AI models run faster and cheaper.