Instruction-Based Coordination of Heterogeneous Processing Units for Acceleration of DNN Inference
By: Anastasios Petropoulos, Theodore Antonakopoulos
Potential Business Impact:
Speeds up AI inference by making multiple processing units on a single chip work together.
This paper presents an instruction-based coordination architecture for Field-Programmable Gate Array (FPGA)-based systems with multiple high-performance Processing Units (PUs) that accelerate Deep Neural Network (DNN) inference. The architecture enables programmable multi-PU synchronization through instruction controller units coupled with peer-to-peer instruction synchronization units, using instruction types organized into load, compute, and store functional groups. A compilation framework transforms DNN models into executable instruction programs, enabling flexible partitioning of DNN models into topologically contiguous subgraphs mapped to the available PUs. Multiple deployment strategies are supported, enabling pipeline parallelism among PUs and batch-level parallelism across different PU subsets, with runtime switching among them without FPGA reconfiguration. The proposed approach enables design space exploration, supporting dynamic trade-offs between single-batch and multi-batch performance. Experimental results on ResNet-50 demonstrate compute efficiency of up to $98\%$ and throughput efficiency gains of up to $2.7\times$ over prior works across different configurations.
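To make the coordination model concrete, the following is a minimal, hypothetical Python sketch of how a compiler might partition a topologically ordered DNN graph into contiguous subgraphs, one per PU, and emit load/compute/store instruction streams with peer-to-peer synchronization tokens for a pipeline-parallel deployment. The class names, instruction format, and token scheme are illustrative assumptions, not the paper's actual ISA or compilation framework.

```python
# Hypothetical sketch, not the paper's implementation: partition a linear DNN
# graph into contiguous subgraphs (one per processing unit) and emit a
# load/compute/store instruction stream with peer-to-peer sync tokens.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Instruction:
    group: str                       # "LOAD", "COMPUTE", or "STORE"
    layer: str                       # layer this instruction operates on
    wait_for: Optional[str] = None   # token produced by an upstream PU
    produces: Optional[str] = None   # token signaled to a downstream PU

def partition(layers, num_pus):
    """Split a topologically ordered layer list into contiguous subgraphs."""
    chunk = -(-len(layers) // num_pus)  # ceiling division
    return [layers[i:i + chunk] for i in range(0, len(layers), chunk)]

def emit_pipeline_program(layers, num_pus):
    """Emit one instruction list per PU for a pipeline-parallel deployment."""
    programs = []
    for pu_id, subgraph in enumerate(partition(layers, num_pus)):
        prog = []
        for i, layer in enumerate(subgraph):
            # The first layer on a downstream PU waits for the upstream PU.
            wait = f"pu{pu_id - 1}_done" if (pu_id > 0 and i == 0) else None
            prog.append(Instruction("LOAD", layer, wait_for=wait))
            prog.append(Instruction("COMPUTE", layer))
            # The last layer signals the next PU in the pipeline.
            done = f"pu{pu_id}_done" if i == len(subgraph) - 1 else None
            prog.append(Instruction("STORE", layer, produces=done))
        programs.append(prog)
    return programs

if __name__ == "__main__":
    resnet_like = [f"conv{i}" for i in range(8)]
    for pu, prog in enumerate(emit_pipeline_program(resnet_like, num_pus=4)):
        print(f"PU{pu}: " + ", ".join(f"{ins.group}({ins.layer})" for ins in prog))
```

Under this reading of the abstract, a batch-level deployment would instead replicate the full layer sequence onto disjoint PU subsets and drop the inter-PU tokens; switching between the two strategies amounts to loading a different instruction program rather than reconfiguring the FPGA.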
Similar Papers
A Scalable FPGA Architecture With Adaptive Memory Utilization for GEMM-Based Operations
Hardware Architecture
Makes AI learn faster and use less power.
Hardware-Aware Data and Instruction Mapping for AI Tasks: Balancing Parallelism, I/O and Memory Tradeoffs
Hardware Architecture
Makes AI run faster using less power.
Flexible Vector Integration in Embedded RISC-V SoCs for End to End CNN Inference Acceleration
Distributed, Parallel, and Cluster Computing
Makes smart devices run AI faster and use less power.