SIRA: Scaled-Integer Range Analysis for Optimizing FPGA Dataflow Neural Network Accelerators

Published: August 29, 2025 | arXiv ID: 2508.21493v1

By: Yaman Umuroglu, Christoph Berganski, Felix Jentzsch, and more

BigTech Affiliations: AMD

Potential Business Impact:

Cuts the chip area and improves the speed of FPGA-based neural network accelerators.

Business Areas:
Field-Programmable Gate Array (FPGA) Hardware

While neural network quantization effectively reduces the cost of matrix multiplications, aggressive quantization can expose non-matrix-multiply operations as significant performance and resource bottlenecks on embedded systems. Addressing such bottlenecks requires a comprehensive approach to tailoring the precision across operations in the inference computation. To this end, we introduce scaled-integer range analysis (SIRA), a static analysis technique employing interval arithmetic to determine the range, scale, and bias for tensors in quantized neural networks. We show how this information can be exploited to reduce the resource footprint of FPGA dataflow neural network accelerators via tailored bitwidth adaptation for accumulators and downstream operations, aggregation of scales and biases, and conversion of consecutive elementwise operations to thresholding operations. We integrate SIRA-driven optimizations into the open-source FINN framework, then evaluate their effectiveness across a range of quantized neural network workloads and compare implementation alternatives for non-matrix-multiply operations. We demonstrate an average reduction of 17% for LUTs, 66% for DSPs, and 22% for accumulator bitwidths with SIRA optimizations, providing detailed benchmark analysis and analytical models to guide the implementation style for non-matrix layers. Finally, we open-source SIRA to facilitate community exploration of its benefits across various applications and hardware platforms.
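The core idea in the abstract — propagating integer ranges through the network and sizing accumulators from the resulting intervals — can be sketched as follows. This is a simplified illustration of interval arithmetic for accumulator bitwidth selection, not SIRA's or FINN's actual code; the function names and the identical-per-element range assumption are my own.

```python
# Sketch (assumption, not the paper's implementation): use interval
# arithmetic to bound a quantized dot-product accumulator and derive
# the minimal two's-complement bitwidth that can hold it.

def interval_mul(a, b):
    """Range of x*y for x in a=(lo,hi), y in b=(lo,hi)."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def accumulator_range(w_range, x_range, n):
    """Range of a sum of n products, assuming every element shares
    the same weight and input ranges (a simplifying assumption)."""
    lo, hi = interval_mul(w_range, x_range)
    return (n * lo, n * hi)

def bits_needed(lo, hi):
    """Smallest signed two's-complement width covering [lo, hi]."""
    bits = 1
    while not (-(1 << (bits - 1)) <= lo and hi <= (1 << (bits - 1)) - 1):
        bits += 1
    return bits

# Example: 4-bit signed weights (-8..7), 8-bit unsigned inputs (0..255),
# dot product over 512 elements.
lo, hi = accumulator_range((-8, 7), (0, 255), 512)
print(bits_needed(lo, hi))  # 21 bits suffice, vs. a default 32-bit accumulator
```

In this toy example the interval bound shows that 21 accumulator bits suffice where a conservative design might allocate 32 — the kind of tailored bitwidth reduction the abstract reports averaging 22% across workloads.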

Country of Origin
πŸ‡³πŸ‡΄ πŸ‡΅πŸ‡± πŸ‡ΊπŸ‡Έ Norway, Poland, United States

Page Count
33 pages

Category
Computer Science:
Hardware Architecture