Score: 2

Quantization Meets OOD: Generalizable Quantization-aware Training from a Flatness Perspective

Published: August 31, 2025 | arXiv ID: 2509.00859v1

By: Jiacheng Jiang, Yuan Meng, Chen Tang, and more

Potential Business Impact:

Keeps compressed (quantized) AI models accurate on new, unseen kinds of data, not just the data they were trained on.

Business Areas:
A/B Testing Data and Analytics

Current quantization-aware training (QAT) methods primarily focus on enhancing the performance of quantized models on in-distribution (ID) data, while overlooking the potential performance degradation on out-of-distribution (OOD) data. In this paper, we first substantiate this problem through rigorous experiments, showing that QAT can lead to a significant degradation in OOD generalization performance. We further trace this problem to a contradiction: flatness of the loss landscape is known to promote superior OOD generalization, yet QAT tends to sharpen the loss landscape. We therefore propose a flatness-oriented QAT method, FQAT, to achieve generalizable QAT. Specifically, i) FQAT introduces a layer-wise freezing mechanism to mitigate the gradient conflict between the two optimization objectives (vanilla QAT and flatness). ii) FQAT proposes a disorder-guided adaptive freezing algorithm that dynamically determines which layers to freeze at each training step, effectively addressing the challenges caused by interference between layers. A gradient disorder metric is designed to help the algorithm identify unstable layers during training. Extensive experiments on an influential OOD benchmark demonstrate the superiority of our method over state-of-the-art baselines on both ID and OOD image classification tasks.
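The abstract describes freezing individual layers during training based on a "gradient disorder" metric, but does not define that metric. The sketch below is a minimal, hypothetical illustration of the general idea: it approximates per-layer gradient instability as the fraction of gradient sign flips between consecutive steps and skips the update for layers whose score exceeds a threshold. The model, threshold, and metric here are illustrative assumptions, not the paper's actual FQAT algorithm.

```python
# Hypothetical sketch: layer-wise freezing driven by a gradient-instability score,
# in the spirit of disorder-guided adaptive freezing. The exact metric and threshold
# are assumptions; the paper's definition is not given in the abstract.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
prev_grad_sign = {}          # last-seen gradient signs, keyed by (layer id, param name)
FREEZE_THRESHOLD = 0.4       # hypothetical: freeze layers whose flip rate exceeds this

def gradient_disorder(layer):
    """Fraction of gradient entries whose sign flipped since the previous step (proxy metric)."""
    flips, total = 0, 0
    for name, p in layer.named_parameters():
        if p.grad is None:
            continue
        sign = p.grad.sign()
        key = (id(layer), name)
        if key in prev_grad_sign:
            flips += (sign != prev_grad_sign[key]).sum().item()
            total += sign.numel()
        prev_grad_sign[key] = sign.clone()
    return flips / total if total else 0.0

for step in range(100):
    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    # Freeze (skip updating) layers that look unstable this step; others train normally.
    for layer in model:
        if list(layer.parameters()) and gradient_disorder(layer) > FREEZE_THRESHOLD:
            for p in layer.parameters():
                p.grad = None    # SGD skips parameters with no gradient, so the layer stays unchanged
    opt.step()
```

In FQAT the freezing decision serves to reconcile the vanilla QAT objective with the flatness objective; this sketch only shows the mechanical pattern of per-step, per-layer freezing.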

Country of Origin
πŸ‡¨πŸ‡³ China

Repos / Data Links

Page Count
15 pages

Category
Computer Science: CV and Pattern Recognition