PARQ: Piecewise-Affine Regularized Quantization

Published: March 19, 2025 | arXiv ID: 2503.15748v1

By: Lisa Jin, Jianhao Ma, Zechun Liu, and more

BigTech Affiliations: Meta

Potential Business Impact:

Makes machine learning models smaller and faster to run.

Business Areas:
Machine Learning Science and Engineering

We develop a principled method for quantization-aware training (QAT) of large-scale machine learning models. Specifically, we show that convex, piecewise-affine regularization (PAR) can effectively induce the model parameters to cluster towards discrete values. We minimize PAR-regularized loss functions using an aggregate proximal stochastic gradient method (AProx) and prove that it has last-iterate convergence. Our approach provides an interpretation of the straight-through estimator (STE), a widely used heuristic for QAT, as the asymptotic form of PARQ. We conduct experiments to demonstrate that PARQ obtains competitive performance on convolution- and transformer-based vision tasks.
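The abstract describes a regularizer that pulls weights toward a discrete set of quantization levels, minimized with a proximal stochastic gradient method. As a minimal sketch (not the paper's AProx algorithm), the snippet below implements a proximal step for an illustrative piecewise-affine penalty r(w) = min over levels q of |w - q|: each weight is soft-thresholded toward its nearest quantization level. The function name, `levels`, and `lam` are assumptions for illustration.

```python
import numpy as np

def prox_par_step(w, levels, lam):
    """Illustrative proximal step for a piecewise-affine regularizer
    r(w) = min_q |w - q| (distance to the nearest quantization level).
    Each weight moves toward its nearest level by at most lam, so
    repeated steps cluster weights onto the discrete set `levels`.
    NOTE: a sketch of the general idea, not the paper's exact AProx method."""
    w = np.asarray(w, dtype=float)
    levels = np.asarray(levels, dtype=float)
    # nearest quantization level for each weight
    q = levels[np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)]
    d = w - q
    # shrink the gap to the nearest level by lam, never overshooting it
    return q + np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

w = np.array([0.9, -0.2, 0.45, -1.1])
out = prox_par_step(w, levels=[-1.0, 0.0, 1.0], lam=0.05)
print(out)  # each weight nudged 0.05 toward its nearest level
```

As `lam` grows, the prox step snaps weights exactly onto the levels, which mirrors the abstract's point that the straight-through estimator arises as an asymptotic (hard-projection) limit of this softer regularized update.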

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
25 pages

Category
Computer Science:
Machine Learning (CS)