CKA-Guided Modular Quantization: Beyond Bit-Width to Algorithmic Diversity
By: Jinhao Zhang, Yunquan Zhang, Daning Chen
Current mainstream post-training quantization (PTQ) methods for large language models typically apply a uniform quantization strategy across all network layers, overlooking the substantial differences in algorithmic suitability among layers. To address this limitation, we propose CKA-Guided Modular Quantization, a fine-tuning-free, plug-and-play framework for algorithmically heterogeneous quantization. Our method independently evaluates multiple PTQ algorithms on each layer and employs Linear Centered Kernel Alignment (CKA) as a metric to automatically select the optimal quantization strategy per layer. The individually optimized strategies are then integrated to construct a hybrid quantized model. Experiments demonstrate that our approach consistently outperforms both uniform quantization baselines and state-of-the-art mixed-precision methods across mainstream LLMs, including LLaMA and Qwen, in terms of perplexity (PPL) and downstream task performance.
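The abstract does not include code; the following is a minimal sketch, under assumed interfaces, of the two ingredients it describes: linear CKA between full-precision and quantized layer activations, and per-layer selection of the PTQ method with the highest CKA score. The function names (linear_cka, select_per_layer) and the dictionary-of-activations layout are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np


def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between two activation matrices of shape (n_samples, n_features)."""
    # Center each feature dimension before comparing representations.
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(y.T @ x, ord="fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, ord="fro")
    norm_y = np.linalg.norm(y.T @ y, ord="fro")
    return float(cross / (norm_x * norm_y))


def select_per_layer(fp_acts: dict, quant_acts_by_method: dict) -> dict:
    """Pick, for each layer, the PTQ method whose activations best match full precision.

    fp_acts: {layer_name: (n, d) full-precision activations on calibration data}
    quant_acts_by_method: {method_name: {layer_name: (n, d) quantized activations}}
    Returns {layer_name: best_method_name}, i.e. the per-layer strategy used to
    assemble the hybrid quantized model.
    """
    choices = {}
    for layer, ref in fp_acts.items():
        scores = {
            method: linear_cka(ref, acts[layer])
            for method, acts in quant_acts_by_method.items()
        }
        choices[layer] = max(scores, key=scores.get)
    return choices
```

In this sketch the selection is greedy and layer-local, which matches the paper's claim of a fine-tuning-free, plug-and-play procedure: each layer is scored independently on calibration activations and the winning algorithms are simply composed into one model.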