MX+: Pushing the Limits of Microscaling Formats for Efficient Large Language Model Serving
By: Jungi Lee, Junyong Park, Soohyun Cha, and more
Potential Business Impact:
Serves large AI language models more cheaply, using less computing power without losing accuracy.
Reduced-precision data formats are crucial for cost-effective serving of large language models (LLMs). While numerous reduced-precision formats have been introduced, they often require intrusive modifications to software frameworks or are too unconventional to see widespread adoption across hardware vendors. In this paper, we instead focus on recent industry-driven variants of block floating-point (BFP) formats and conduct a comprehensive analysis to push their limits for efficient LLM serving. Our analysis shows that existing ultra-low-bit BFP variants struggle to deliver reasonable language model performance due to outlier values within blocks. To address these outliers within BFPs, we propose MX+, a cost-effective and non-intrusive extension designed for seamless integration into the microscaling (MX) formats. MX+ builds on the key insight that an outlier element does not need its exponent field in the element data type, which allows us to repurpose that field as an extended mantissa and so increase the outlier's precision. Our evaluation shows that MX+ achieves significantly higher model performance than the 4-bit MX format (MXFP4) with negligible storage overhead and slowdown, making it a compelling alternative to MXFP4 or MXFP6 for efficient LLM inference.
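The mechanism is easiest to see in a small sketch. In an MX block, the shared power-of-two scale is set by the largest-magnitude element, so that element's exponent is implicitly the element format's maximum; MX+ reuses its exponent bits (two bits in E2M1/FP4) as extra mantissa bits. Below is a minimal NumPy illustration of that idea as we read it from the abstract; the function names, block handling, and rounding details are our assumptions for illustration, not the paper's implementation.

```python
import numpy as np

FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # E2M1 magnitudes
EMAX_E2M1 = 2  # largest exponent of the FP4 (E2M1) element type

def quantize_fp4(x):
    """Round each value to the nearest FP4 (E2M1) value, sign handled separately."""
    idx = np.abs(FP4_GRID[None, :] - np.abs(x)[:, None]).argmin(axis=1)
    return np.sign(x) * FP4_GRID[idx]

def mxfp4_block(block):
    """Baseline MXFP4: one power-of-two scale per block, FP4 elements."""
    shared_exp = np.floor(np.log2(np.abs(block).max())) - EMAX_E2M1
    scale = 2.0 ** shared_exp
    return quantize_fp4(block / scale) * scale

def mxplus_block(block):
    """MX+-style sketch: re-encode the block's outlier with a longer mantissa.

    The outlier sets the shared scale, so its exponent is implicitly EMAX_E2M1.
    Its two exponent bits can therefore hold two extra mantissa bits, giving
    the outlier a 3-bit mantissa instead of E2M1's single mantissa bit.
    """
    shared_exp = np.floor(np.log2(np.abs(block).max())) - EMAX_E2M1
    scale = 2.0 ** shared_exp
    out = quantize_fp4(block / scale) * scale
    k = np.abs(block).argmax()  # outlier position (kept as per-block metadata)
    mag = np.abs(block[k]) / (scale * 2.0 ** EMAX_E2M1)  # normalized to [1, 2)
    m = np.clip(np.round((mag - 1.0) * 8), 0, 7)  # 3-bit extended mantissa
    out[k] = np.sign(block[k]) * (1.0 + m / 8.0) * 2.0 ** EMAX_E2M1 * scale
    return out

# The outlier 5.3 rounds to 6.0 under MXFP4 but to 5.5 under the MX+ sketch.
block = np.array([0.7, -1.2, 5.3, 0.1, -0.4, 2.1, 0.9, -3.0])
print(mxfp4_block(block))
print(mxplus_block(block))
```

Under this reading, the only extra per-block state is the outlier's position (e.g., 5 bits for a 32-element block), which would be consistent with the abstract's claim of negligible storage overhead.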
Similar Papers
Recipes for Pre-training LLMs with MXFP8
Machine Learning (CS)
Makes computers learn faster using less memory.
Microscaling Floating Point Formats for Large Language Models
Neural and Evolutionary Computing
Makes big computer brains use less memory.
Pushing the Limits of BFP on Narrow Precision LLM Inference
Hardware Architecture
Makes AI models run much faster and cheaper.