Rescaling-Aware Training for Efficient Deployment of Deep Learning Models on Full-Integer Hardware

Published: October 13, 2025 | arXiv ID: 2510.11484v1

By: Lion Mueller, Alberto Garcia-Ortiz, Ardalan Najafi and more

Potential Business Impact:

Makes AI on small devices run faster and cheaper.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Integer AI inference significantly reduces computational complexity in embedded systems. Quantization-aware training (QAT) helps mitigate the accuracy degradation associated with post-training quantization, but it still overlooks the impact of integer rescaling during inference, an operation that is costly in hardware for integer-only AI inference. This work shows that the rescaling cost can be dramatically reduced post-training by applying stronger quantization to the rescale multiplicands, with no loss in model quality. Furthermore, we introduce Rescale-Aware Training, a fine-tuning method for ultra-low bit-width rescaling multiplicands. Experiments show that even with rescaler widths reduced by 8x, full accuracy is preserved through minimal incremental retraining. This enables more energy-efficient and cost-efficient AI inference for resource-constrained embedded systems.
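In integer-only inference, the int32 accumulator of a quantized layer is typically requantized by multiplying with a fixed-point integer multiplier and then applying a right shift; the "rescale multiplicands" discussed in the abstract are these multipliers. The sketch below is not taken from the paper: the function names and the exact fixed-point scheme are illustrative assumptions. It shows how a real-valued rescale factor can be quantized to an n-bit multiplier plus shift, and how shrinking that multiplier from 32 bits to, say, 4 bits changes the rescaled result only slightly.

```python
# Minimal sketch of fixed-point integer rescaling (requantization), assuming the
# common multiplier-plus-shift scheme used in integer-only inference. The bit
# widths and helper names are illustrative, not the paper's exact method.
import math
import numpy as np

def quantize_rescaler(scale: float, mult_bits: int = 32):
    """Represent a real-valued rescale factor as an integer multiplier and a right shift."""
    if scale <= 0:
        raise ValueError("scale must be positive")
    mantissa, exp = math.frexp(scale)          # scale = mantissa * 2**exp, mantissa in [0.5, 1)
    mult = int(round(mantissa * (1 << mult_bits)))
    shift = mult_bits - exp
    if mult == (1 << mult_bits):               # mantissa rounded up to 1.0: renormalize
        mult >>= 1
        shift -= 1
    return mult, shift

def rescale(acc: np.ndarray, mult: int, shift: int) -> np.ndarray:
    """Apply integer rescaling with rounding: (acc * mult + round) >> shift."""
    prod = acc.astype(np.int64) * mult
    rounding = 1 << (shift - 1)                # assumes shift > 0 (scale < 1, the usual case)
    return ((prod + rounding) >> shift).astype(np.int32)

# Example: the same int32 accumulator rescaled with a 32-bit vs. a 4-bit multiplier.
acc = np.array([12345, -6789, 250000], dtype=np.int32)
for bits in (32, 4):
    m, s = quantize_rescaler(0.0073, mult_bits=bits)
    print(f"{bits:>2}-bit multiplier:", rescale(acc, m, s))
```

With the narrower multiplier, the per-element rescaled values stay within a small rounding error of the 32-bit result, which is the effect the paper exploits; its contribution is recovering any remaining accuracy gap through rescale-aware fine-tuning rather than widening the multiplier.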

Page Count
4 pages

Category
Computer Science:
Machine Learning (CS)