Mugi: Value Level Parallelism For Efficient LLMs
By: Daniel Price, Prabhu Vellaisamy, John Shen, and more
Potential Business Impact:
Makes AI smarter, faster, and more energy-efficient.
Value-level parallelism (VLP) has been proposed to improve the efficiency of large-batch, low-precision general matrix multiply (GEMM) between symmetric activations and weights. Transformer-based large language models (LLMs), however, involve more sophisticated operations beyond activation-weight GEMM. In this paper, we explore how VLP benefits LLMs. First, we generalize VLP to nonlinear approximations, outperforming existing nonlinear approximations in end-to-end LLM accuracy, performance, and efficiency. Our VLP approximation follows a value-centric approach, in which important values are computed with greater accuracy. Second, we optimize VLP to handle small-batch GEMMs with asymmetric inputs efficiently, leveraging recent LLM optimizations including weight-only quantization, key-value (KV) cache quantization, and grouped-query attention. Finally, we design a new VLP architecture, Mugi, that encapsulates the innovations above and supports full LLM workloads while providing better performance, efficiency, and sustainability. Our experimental results show that Mugi offers significant throughput and energy-efficiency improvements: up to $45\times$ and $668\times$, respectively, for nonlinear softmax operations, and $2.07\times$ and $3.11\times$ for end-to-end LLMs, while also reducing the operational carbon of LLM serving by $1.45\times$ and embodied carbon by $1.48\times$.
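The value-centric idea, giving greater accuracy to the values that matter most, can be illustrated with a small softmax sketch. The snippet below is a hypothetical Python illustration, not the paper's actual VLP kernel: the `k` largest logits, which dominate the softmax output, are exponentiated exactly, while the unimportant tail uses a cheap clamped first-order approximation. The function name `value_centric_softmax` and the parameter `k` are invented here for illustration.

```python
import numpy as np

def value_centric_softmax(logits, k=8):
    # Hypothetical sketch of a value-centric approximation: the k
    # largest logits (which dominate the result) get an exact exp,
    # while the unimportant tail uses a cheap clamped linear term.
    x = logits - logits.max()                 # standard shift for numerical stability
    out = np.empty_like(x)

    top = np.argpartition(x, -k)[-k:]         # indices of the k most important values
    rest = np.setdiff1d(np.arange(x.size), top)

    out[top] = np.exp(x[top])                 # full-accuracy path for important values
    out[rest] = np.maximum(1.0 + x[rest], 0)  # exp(t) ~ max(1 + t, 0); exactly 0 for t <= -1

    return out / out.sum()                    # renormalize into a distribution

# The approximation tracks the exact softmax closely because nearly all
# of the probability mass lives in the accurately computed top-k entries.
rng = np.random.default_rng(0)
z = rng.normal(size=64)
exact = np.exp(z - z.max())
exact /= exact.sum()
print("max abs error:", np.abs(value_centric_softmax(z) - exact).max())
```

In hardware, a split like this would map the high-accuracy path onto a precise functional unit and the tail onto a narrow low-precision one; the paper's actual Mugi datapath is not reproduced here.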
Similar Papers
MHA2MLA-VLM: Enabling DeepSeek's Economical Multi-Head Latent Attention across Vision-Language Models
CV and Pattern Recognition
Makes AI models faster and use less memory.
HybridToken-VLM: Hybrid Token Compression for Vision-Language Models
CV and Pattern Recognition
Lets computers understand pictures better, faster.
From Brute Force to Semantic Insight: Performance-Guided Data Transformation Design with LLMs
CV and Pattern Recognition
Helps computers write better code automatically.