Dataflow & Tiling Strategies in Edge-AI FPGA Accelerators: A Comprehensive Literature Review
By: Richie Li
Edge-AI applications demand high-throughput, low-latency inference on FPGAs under tight resource and power constraints. This survey provides a comprehensive review of two key architectural decisions for FPGA-based neural network accelerators: (i) the dataflow, i.e., the order and manner in which data is moved and reused on chip, and (ii) the tiling/blocking strategy, i.e., how large tensors are partitioned to fit in on-chip memory. We first present a broadened taxonomy of four canonical dataflow styles: Weight-Stationary, Output-Stationary, Row-Stationary, and No-Local-Reuse, including formal definitions, pseudocode, diagrams, and real FPGA examples. We then discuss analytical frameworks (MAESTRO, Timeloop), comparing them in a concise feature table that illustrates how they model reuse, performance, and hardware cost, and include a case study of a 3×3 convolution layer to demonstrate typical tool outputs. Next, we detail multi-level tiling and loop unrolling/pipelining strategies for FPGAs, clarifying how each memory tier (registers, LUTRAM, BRAM, HBM) can be exploited. Our four case studies (FINN, FINN-R, FlightLLM, and SSR) highlight distinct dataflows, from binary streaming to hybrid sparse transformations, and tiling patterns. We include a unified comparison matrix covering platform, precision, throughput, resource utilization, and energy efficiency, plus a small block diagram for each design. We conclude by examining design-automation trade-offs among HLS, DSLs, and hand-coded RTL, offering a "lessons learned" summary box, and charting future research directions in partial reconfiguration, hybrid dataflows, and domain-specific compiler flows for next-generation edge-AI FPGA accelerators.
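To make the two central concepts concrete, the following is a minimal C sketch (not drawn from any of the surveyed designs) of an output-stationary, two-level-tiled 3×3 convolution loop nest. The layer shape and the tile sizes TH/TW are arbitrary assumptions chosen for illustration; in an HLS flow the tile loops would stream tiles from DRAM into BRAM buffers, and the innermost loops would be unrolled and pipelined into MAC arrays.

/* Illustrative sketch: output-stationary dataflow with two-level tiling.
 * All dimensions and tile sizes below are assumed values for the example. */
#include <stdio.h>

#define C_IN 8    /* input channels  (assumed) */
#define C_OUT 8   /* output channels (assumed) */
#define H 16      /* output height   (assumed) */
#define W 16      /* output width    (assumed) */
#define K 3       /* 3x3 kernel */
#define TH 4      /* tile height, sized to fit a BRAM buffer (assumed) */
#define TW 4      /* tile width,  sized to fit a BRAM buffer (assumed) */

static float in[C_IN][H + K - 1][W + K - 1];
static float wts[C_OUT][C_IN][K][K];
static float out[C_OUT][H][W];

int main(void) {
    /* Tile loops: each (th, tw) output tile stays resident on chip. */
    for (int th = 0; th < H; th += TH)
    for (int tw = 0; tw < W; tw += TW)
    for (int oc = 0; oc < C_OUT; ++oc)
    /* Output-stationary: each output accumulates locally (in a register)
     * across all C_IN*K*K MACs and is written back exactly once. */
    for (int oh = th; oh < th + TH; ++oh)
    for (int ow = tw; ow < tw + TW; ++ow) {
        float acc = 0.0f;
        for (int ic = 0; ic < C_IN; ++ic)
            for (int kh = 0; kh < K; ++kh)
                for (int kw = 0; kw < K; ++kw)  /* unroll/pipeline target */
                    acc += wts[oc][ic][kh][kw] * in[ic][oh + kh][ow + kw];
        out[oc][oh][ow] = acc;
    }
    printf("out[0][0][0] = %f\n", out[0][0][0]);
    return 0;
}

A weight-stationary variant of the same nest would instead hoist the (oc, ic, kh, kw) loops outward so each weight is held in a register while the spatial loops sweep beneath it; the tiling levels map unchanged onto the register/BRAM/HBM hierarchy discussed in the survey.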