AQUATIC-Diff: Additive Quantization for Truly Tiny Compressed Diffusion Models

Published: June 6, 2025 | arXiv ID: 2506.05960v2

By: Adil Hasan, Thomas Peyrin

Potential Business Impact:

Shrinks AI image-generation models so they run faster and on less powerful, lower-power hardware.

Business Areas:
Artificial Intelligence and Machine Learning

Significant investments have been made towards the commodification of diffusion models for the generation of diverse media. Their mass-market adoption, however, is still hobbled by the intense hardware resource requirements of diffusion model inference. Model quantization strategies tailored specifically to diffusion models have helped ease this burden, yet have generally explored only the Uniform Scalar Quantization (USQ) family of methods. In contrast, Vector Quantization (VQ) methods, which operate on groups of multiple related weights as the basic unit of compression, have seen substantial success in Large Language Model (LLM) quantization. In this work, we apply codebook-based additive vector quantization to the problem of diffusion model compression. Our resulting approach achieves a new Pareto frontier for extremely low-bit weight quantization on the standard class-conditional benchmark of LDM-4 on ImageNet at 20 inference time steps. Notably, we report sFID 1.92 points lower than the full-precision model at W4A8, and the best reported results for FID, sFID and ISC at W2A8. We also demonstrate FLOPs savings on arbitrary hardware via an efficient inference kernel, as opposed to savings resulting from small integer operations, which may lack broad hardware support.
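The core mechanic, codebook-based additive vector quantization, is easy to illustrate at decode time: each group of weights is stored as a short list of codebook indices, and dequantization sums the selected codewords. Below is a minimal NumPy sketch, not the paper's implementation; the group size d, codebook count M, and codebook size K are hypothetical choices. With d = 8, M = 2, K = 256, the stored codes happen to work out to 2 bits per weight, matching the W2 regime reported above (ignoring the small amortized cost of storing the codebooks).

```python
import numpy as np

# Hypothetical configuration, not taken from the paper.
d = 8          # weights per group (vector dimension)
M = 2          # number of additive codebooks
K = 256        # codewords per codebook -> 8-bit indices

n_groups = 1024
rng = np.random.default_rng(0)

# In practice the codebooks are learned to minimize reconstruction
# error; random values stand in for them here.
codebooks = rng.standard_normal((M, K, d)).astype(np.float32)

# Quantized storage: M small integer indices per weight group.
codes = rng.integers(0, K, size=(n_groups, M))

def decode_group(group_codes: np.ndarray) -> np.ndarray:
    """Reconstruct one weight group as the sum of its selected codewords."""
    return sum(codebooks[m, group_codes[m]] for m in range(M))

# Dequantize all groups into (n_groups, d) approximate weights.
W_hat = np.stack([decode_group(c) for c in codes])

# Effective code storage: M * log2(K) bits per d weights.
bits_per_weight = M * np.log2(K) / d
print(bits_per_weight)  # 2.0
```

Because decoding is nothing but floating-point gathers and adds, a fused kernel along these lines can realize FLOPs savings on any hardware, which is the portability point the abstract makes against relying on small-integer arithmetic support.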

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)