Mugi: Value Level Parallelism For Efficient LLMs

Published: January 15, 2026 | arXiv ID: 2601.10823v1

By: Daniel Price, Prabhu Vellaisamy, John Shen, and more

Potential Business Impact:

Makes AI smarter, faster, and more energy-efficient.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Value-level parallelism (VLP) has been proposed to improve the efficiency of large-batch, low-precision general matrix multiply (GEMM) between symmetric activations and weights. Transformer-based large language models (LLMs), however, involve more sophisticated operations than activation-weight GEMM. In this paper, we explore how VLP benefits LLMs. First, we generalize VLP to nonlinear approximations, outperforming existing nonlinear approximations in end-to-end LLM accuracy, performance, and efficiency. Our VLP approximation follows a value-centric approach, in which important values are assigned greater accuracy. Second, we optimize VLP for small-batch GEMMs with asymmetric inputs, leveraging recent LLM optimizations including weight-only quantization, key-value (KV) cache quantization, and group query attention. Finally, we design a new VLP architecture, Mugi, that encapsulates these innovations and supports full LLM workloads while providing better performance, efficiency, and sustainability. Our experimental results show that Mugi improves throughput and energy efficiency by up to $45\times$ and $668\times$ for nonlinear softmax operations and by $2.07\times$ and $3.11\times$ for end-to-end LLMs, while reducing operational carbon by $1.45\times$ and embodied carbon by $1.48\times$.
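
The value-centric idea in the abstract, spending accuracy on the values that matter most, can be illustrated for softmax, the nonlinear operation where the paper reports its largest gains. The sketch below is a generic illustration under assumed choices, not Mugi's algorithm: the threshold `tau` and the zero-out fallback are ours. Entries near the row maximum, which dominate the softmax output, get an exact exponential, while the negligible tail is skipped.

```python
import numpy as np

def value_centric_softmax(x: np.ndarray, tau: float = 8.0) -> np.ndarray:
    # Max-shift for numerical stability; each row now has a 0 at its maximum.
    z = x - x.max(axis=-1, keepdims=True)
    out = np.zeros_like(z)
    important = z > -tau                    # values close to the max dominate the result
    out[important] = np.exp(z[important])   # spend the exact exp only on those
    # Tail entries stay 0: exp(z) <= exp(-tau) there, a negligible contribution.
    return out / out.sum(axis=-1, keepdims=True)

scores = 3.0 * np.random.randn(2, 8)
probs = value_centric_softmax(scores)
print(probs.sum(axis=-1))                   # each row sums to 1
```

Raising `tau` trades more exact exponentials for accuracy; a hardware design like Mugi would make this trade-off per value rather than with a single global threshold.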
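The small-batch GEMM path the abstract describes leans on weight-only quantization, where weights are stored in low precision while the small batch of activations stays in floating point. Below is a minimal sketch of that baseline technique (standard symmetric per-channel int8 quantization, not Mugi's VLP datapath); shapes and names are illustrative.

```python
import numpy as np

def quantize_weights(w: np.ndarray):
    """Symmetric per-output-channel int8 quantization: w ~ q * scale."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def wq_gemm(x: np.ndarray, q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """y = x @ w.T with int8 weights dequantized on the fly."""
    return x @ (q.astype(np.float32) * scale).T

w = np.random.randn(64, 128).astype(np.float32)   # [out_features, in_features]
x = np.random.randn(4, 128).astype(np.float32)    # small batch of activations
q, s = quantize_weights(w)
err = np.abs(wq_gemm(x, q, s) - x @ w.T).max()
print(f"max abs error vs. fp32 GEMM: {err:.4f}")
```

Because the weight matrix is asymmetric with respect to the activations in both precision and batch size, this is exactly the small-batch, asymmetric-input regime the paper says it optimizes VLP for.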

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
22 pages

Category
Computer Science:
Machine Learning (CS)