Hardware-Software Optimizations for Fast Model Recovery on Reconfigurable Architectures
By: Bin Xu, Ayan Banerjee, Sandeep Gupta
Potential Business Impact:
Makes robots move and learn much faster.
Model Recovery (MR) is a core primitive for physical AI and real-time digital twins, but GPUs often execute MR inefficiently due to iterative dependencies, kernel-launch overheads, underutilized memory bandwidth, and high data-movement latency. We present MERINDA, an FPGA-accelerated MR framework that restructures computation as a streaming dataflow pipeline. MERINDA exploits on-chip locality through BRAM tiling, fixed-point kernels, and the concurrent use of LUT fabric and carry-chain adders to expose fine-grained spatial parallelism while minimizing off-chip traffic. This hardware-aware formulation removes synchronization bottlenecks and sustains high throughput across the iterative updates in MR. On representative MR workloads, MERINDA delivers up to 6.3x fewer cycles than an FPGA-based LTC baseline, enabling real-time performance for time-critical physical systems.
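To make the fixed-point, iterative flavor of MR concrete, the sketch below recovers the dynamics matrix of a linear system by gradient descent, with every multiply-accumulate carried out in integer fixed-point arithmetic of the kind an FPGA maps onto LUTs and carry-chain adders. This is an illustrative software emulation only: the function name, the choice of linear dynamics, and the quantization scheme (`frac_bits` fractional bits) are assumptions for exposition, not MERINDA's actual kernels.

```python
import numpy as np

def to_fixed(x, frac_bits=16):
    # Quantize to signed fixed-point with `frac_bits` fractional bits.
    return np.round(np.asarray(x) * (1 << frac_bits)).astype(np.int64)

def from_fixed(x, frac_bits=16):
    # Convert fixed-point integers back to floating point.
    return x.astype(np.float64) / (1 << frac_bits)

def recover_dynamics_fixed_point(states, lr=0.1, iters=300, frac_bits=16):
    """Recover A in x[t+1] = A @ x[t] by fixed-point gradient descent.

    Emulates in software the integer multiply-accumulate updates an FPGA
    dataflow pipeline would perform; hypothetical sketch, not MERINDA code.
    `states` has shape (T, n): one state vector per time step.
    """
    X = states[:-1].T                      # inputs,  shape (n, T-1)
    Y = states[1:].T                       # targets, shape (n, T-1)
    n, T = X.shape
    Xq = to_fixed(X, frac_bits)
    Yq = to_fixed(Y, frac_bits)
    lr_q = to_fixed(lr / T, frac_bits)     # per-sample step size
    A = np.zeros((n, n))
    for _ in range(iters):
        Aq = to_fixed(A, frac_bits)
        pred = (Aq @ Xq) >> frac_bits      # fixed-point matrix multiply
        err = pred - Yq                    # residual, fixed-point scale
        grad = (err @ Xq.T) >> frac_bits   # fixed-point gradient
        Aq = Aq - ((lr_q * grad) >> frac_bits)
        A = from_fixed(Aq, frac_bits)
    return A
```

On hardware, each `>> frac_bits` rescaling is free (a wiring choice), the inner products stream through pipelined multiply-accumulate units, and keeping `Xq`/`Yq` resident in BRAM tiles is what avoids the off-chip traffic the abstract refers to.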
Similar Papers
Model Recovery at the Edge under Resource Constraints for Physical AI
Artificial Intelligence
Makes robots learn and act faster, using less power.
MIREDO: MIP-Driven Resource-Efficient Dataflow Optimization for Computing-in-Memory Accelerator
Hardware Architecture
Makes AI run much faster on special chips.
Memory-Integrated Reconfigurable Adapters: A Unified Framework for Settings with Multiple Tasks
Machine Learning (CS)
AI learns new things without forgetting old ones.