Instruction-Based Coordination of Heterogeneous Processing Units for Acceleration of DNN Inference

Published: November 19, 2025 | arXiv ID: 2511.15505v1

By: Anastasios Petropoulos, Theodore Antonakopoulos

Potential Business Impact:

Speeds up AI inference by making multiple processing units on one chip work together.

Business Areas:
Field-Programmable Gate Array (FPGA) Hardware

This paper presents an instruction-based coordination architecture for Field-Programmable Gate Array (FPGA)-based systems with multiple high-performance Processing Units (PUs) that accelerate Deep Neural Network (DNN) inference. The architecture enables programmable multi-PU synchronization through instruction controller units coupled with peer-to-peer instruction synchronization units, using instruction types organized into load, compute, and store functional groups. A compilation framework transforms DNN models into executable instruction programs, enabling flexible partitioning of models into topologically contiguous subgraphs mapped to the available PUs. Multiple deployment strategies are supported, enabling pipeline parallelism among PUs and batch-level parallelism across different PU subsets, with runtime switching among them and no FPGA reconfiguration. The approach supports design space exploration with dynamic trade-offs between single-batch and multi-batch performance. Experimental results on ResNet-50 demonstrate compute efficiency of up to 98% and throughput efficiency gains of up to 2.7× over prior works across different configurations.
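The compilation flow described above (partitioning a DNN into topologically contiguous subgraphs, one per PU, then emitting load/compute/store instruction programs) can be sketched roughly as follows. This is a minimal illustration only; the function names, instruction tuples, and the assumption of a linear layer graph are all simplifications, not the paper's actual compiler or instruction set.

```python
# Hypothetical sketch of the compilation idea (all names are assumptions):
# split an ordered layer list into topologically contiguous chunks, one per
# processing unit (PU), then emit a simple load/compute/store program per PU.

def partition(layers, num_pus):
    """Split an ordered layer list into contiguous chunks, one per PU."""
    chunk = -(-len(layers) // num_pus)  # ceiling division
    return [layers[i:i + chunk] for i in range(0, len(layers), chunk)]

def emit_program(subgraph):
    """Emit a load/compute/store instruction stream for one PU's subgraph."""
    program = []
    for layer in subgraph:
        program.append(("LOAD", layer))      # fetch weights/activations
        program.append(("COMPUTE", layer))   # run the layer kernel
    program.append(("STORE", subgraph[-1]))  # write the subgraph's output
    return program

if __name__ == "__main__":
    layers = [f"conv{i}" for i in range(6)]
    for pu, subgraph in enumerate(partition(layers, 3)):
        print(f"PU{pu}:", emit_program(subgraph))
```

With three PUs executing these programs concurrently and passing subgraph outputs downstream, the same partitioning naturally yields the pipeline parallelism the paper describes; replicating one program across PU subsets would instead give batch-level parallelism.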

Country of Origin
🇬🇷 Greece

Page Count
9 pages

Category
Computer Science:
Hardware Architecture