Mixture of Lookup Experts
By: Shibo Jie, Yehui Tang, Kai Han and more
Potential Business Impact:
Makes AI models run faster and use less memory.
Mixture-of-Experts (MoE) activates only a subset of experts during inference, allowing the model to maintain low inference FLOPs and latency even as the parameter count scales up. However, because MoE dynamically selects the experts, all of them must be loaded into VRAM. Their large total parameter size still limits deployment, and offloading, which loads experts into VRAM only when needed, significantly increases inference latency. To address this, we propose Mixture of Lookup Experts (MoLE), a new MoE architecture that is efficient in both communication and VRAM usage. In MoLE, the experts are Feed-Forward Networks (FFNs) during training, taking the output of the embedding layer as input. Before inference, these experts can be re-parameterized as lookup tables (LUTs) that retrieve expert outputs based on input IDs, and offloaded to storage devices. As a result, no expert computation is needed during inference; instead, each expert's precomputed output is retrieved based on the input ID and loaded into VRAM, so the resulting communication overhead is negligible. Experiments show that, with the same FLOPs and VRAM usage, MoLE achieves inference speeds comparable to dense models and significantly faster than MoE with expert offloading, while maintaining performance on par with MoE.
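Because each expert's input is fully determined by the token ID (it is just that token's embedding), the re-parameterization can be understood as precomputing every expert's output for every vocabulary entry. Below is a minimal PyTorch sketch of this idea, assuming a simple two-layer FFN expert and a toy embedding table; all names and dimensions are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

vocab_size, d_model, d_hidden, num_experts = 1000, 64, 256, 4

embedding = nn.Embedding(vocab_size, d_model)

# During training, each expert is an FFN that takes the embedding-layer output
# (not the hidden state) as its input.
experts = nn.ModuleList(
    nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
    for _ in range(num_experts)
)

# Before inference: since an expert's input depends only on the token ID, its
# output can be precomputed for every ID in the vocabulary, turning the FFN
# into a lookup table that can be offloaded to storage.
with torch.no_grad():
    all_embeds = embedding.weight                                   # (vocab_size, d_model)
    lut = torch.stack([e(all_embeds) for e in experts], dim=1)      # (vocab_size, num_experts, d_model)

# At inference, "expert computation" is just indexing the LUT by input IDs.
input_ids = torch.tensor([3, 17, 42])
expert_outputs = lut[input_ids]                                     # (3, num_experts, d_model)

# A router (not shown here) would then weight and sum these per-token expert outputs.
```

Only the rows of the LUT for the current input IDs need to be fetched into VRAM, which is what keeps the communication overhead negligible compared to loading full expert weights.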
Similar Papers
Mixture of Lookup Key-Value Experts
Machine Learning (CS)
Makes smart computer programs work on phones.
eMoE: Task-aware Memory Efficient Mixture-of-Experts-Based (MoE) Model Inference
Machine Learning (CS)
Makes AI smarter using less computer memory.
Accelerating Mixture-of-Expert Inference with Adaptive Expert Split Mechanism
Machine Learning (CS)
Makes AI models run faster and cheaper.