Benchmarking Post-Training Quantization of Large Language Models under Microscaling Floating Point Formats
By: Manyi Zhang, Ji-Fu Li, Zhongao Sun, and more
Microscaling Floating-Point (MXFP) has emerged as a promising low-precision format for large language models (LLMs). Although various post-training quantization (PTQ) algorithms have been proposed, they focus mostly on integer quantization, and their applicability and behavior under MXFP formats remain largely unexplored. To address this gap, this work conducts a systematic investigation of PTQ under MXFP formats, encompassing over 7 PTQ algorithms, 15 evaluation benchmarks, and 3 LLM families. The key findings include: 1) MXFP8 consistently achieves near-lossless performance, while MXFP4 introduces substantial accuracy degradation and remains challenging; 2) PTQ effectiveness under MXFP depends strongly on format compatibility, with some algorithmic paradigms being consistently more effective than others; 3) PTQ performance exhibits highly consistent trends across model families and modalities; in particular, quantization sensitivity in multimodal LLMs is dominated by the language model rather than the vision encoder; 4) the quantization scaling factor is a critical error source in MXFP4, and a simple pre-scale optimization strategy can significantly mitigate its impact. Together, these results provide practical guidance on adapting existing PTQ methods to MXFP quantization.
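To make the format concrete, below is a minimal NumPy sketch of MXFP4 fake-quantization as defined in the OCP Microscaling spec: each block of 32 values shares a single power-of-two (E8M0) scale, and individual elements are rounded to the FP4 (E2M1) grid. The `scale_shift` knob is a hypothetical illustration of the pre-scale idea in finding 4, not the paper's actual algorithm; it merely shows that the choice of shared exponent directly changes the reconstruction error.

```python
import numpy as np

# FP4 (E2M1) representable magnitudes, per the OCP Microscaling (MX) spec.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_mxfp4_block(block, scale_shift=0):
    """Fake-quantize one block of values to MXFP4.

    The block shares one power-of-two (E8M0) scale. `scale_shift` is a
    hypothetical knob used here only to show that adjusting the shared
    exponent (a crude form of pre-scale optimization) changes the error.
    """
    amax = np.max(np.abs(block))
    if amax == 0:
        return np.zeros_like(block)
    # Power-of-two scale so the largest element lands near FP4's max (6.0);
    # 2 = floor(log2(6)) is the max exponent of the E2M1 element format.
    exp = int(np.floor(np.log2(amax))) - 2 + scale_shift
    scale = 2.0 ** exp
    scaled = block / scale
    # Round each element to the nearest FP4 magnitude, keeping its sign.
    idx = np.argmin(np.abs(np.abs(scaled)[:, None] - FP4_GRID[None, :]), axis=1)
    return np.sign(scaled) * FP4_GRID[idx] * scale

def quantize_mxfp4(x, block_size=32, scale_shift=0):
    """Apply blockwise MXFP4 fake-quantization along the last axis."""
    flat = x.reshape(-1, block_size)
    out = np.stack([quantize_mxfp4_block(b, scale_shift) for b in flat])
    return out.reshape(x.shape)

# Toy demo: the shared-scale choice is itself an error source (finding 4).
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 64)).astype(np.float32)
for shift in (-1, 0, 1):
    err = np.mean((w - quantize_mxfp4(w, scale_shift=shift)) ** 2)
    print(f"scale_shift={shift:+d}: reconstruction MSE = {err:.5f}")
```

The sketch only simulates quantization in floating point (a "fake-quant" pass); a real MXFP4 kernel would pack 4-bit element codes plus the 8-bit shared exponent, but the rounding behavior and the sensitivity to the shared scale are the same.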