Bare-Metal RISC-V + NVDLA SoC for Efficient Deep Learning Inference
By: Vineet Kumar, Ajay Kumar M, Yike Li, and more
Potential Business Impact:
Makes smart devices run AI much faster.
This paper presents a novel System-on-Chip (SoC) architecture that accelerates complex deep learning models for edge computing applications through a combination of hardware and software optimisations. The hardware architecture tightly couples the open-source NVIDIA Deep Learning Accelerator (NVDLA) to uRISC_V, a 32-bit, 4-stage pipelined RISC-V core from Codasip. To offload model acceleration in software, our toolflow generates bare-metal application code (in assembly), avoiding the operating-system overheads incurred by previous works that explored similar architectures. This tightly coupled architecture and bare-metal flow improve execution speed and storage efficiency, making the design suitable for edge computing solutions. We evaluate the architecture on AMD's ZCU102 FPGA board using the NVDLA-small configuration and test the flow with the LeNet-5, ResNet-18 and ResNet-50 models. Our results show that these models perform inference in 4.8 ms, 16.2 ms and 1.1 s respectively, at a system clock frequency of 100 MHz.
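The paper's toolflow emits assembly rather than C, but the control pattern it automates can be sketched in a few lines: with no OS in the way, the RISC-V core drives NVDLA directly through memory-mapped registers and polls its interrupt status instead of relying on a kernel interrupt handler. The sketch below is illustrative only; NVDLA_BASE, the two register offsets, the write-1-to-clear behaviour, and the run_layer helper are all assumptions, not values from the paper.

#include <stdint.h>

/* Assumed SoC address map and register offsets -- placeholders, not the
 * paper's actual configuration. */
#define NVDLA_BASE      0x40000000u
#define GLB_INTR_STATUS (NVDLA_BASE + 0x000Cu)  /* assumed interrupt-status reg */
#define CONV_ENABLE     (NVDLA_BASE + 0x5008u)  /* assumed layer-enable reg */

static inline void mmio_write(uintptr_t addr, uint32_t val)
{
    *(volatile uint32_t *)addr = val;   /* volatile forces a real bus access */
}

static inline uint32_t mmio_read(uintptr_t addr)
{
    return *(volatile uint32_t *)addr;
}

/* Hypothetical helper: kick one pre-configured layer, then busy-wait for
 * NVDLA's completion flag. */
void run_layer(void)
{
    mmio_write(CONV_ENABLE, 1u);        /* start the accelerator */
    while ((mmio_read(GLB_INTR_STATUS) & 1u) == 0u)
        ;                               /* poll: no OS, so no ISR needed */
    mmio_write(GLB_INTR_STATUS, 1u);    /* clear the flag (write-1-to-clear
                                           assumed) */
}

Polling like this trades CPU idle time for determinism and simplicity, which is exactly the trade a bare-metal edge deployment can afford; an OS-hosted flow would instead pay for context switches and interrupt dispatch on every layer boundary.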
Similar Papers
Chiplet-Based RISC-V SoC with Modular AI Acceleration
Hardware Architecture
Makes smart gadgets faster and use less power.
Flexible Vector Integration in Embedded RISC-V SoCs for End to End CNN Inference Acceleration
Distributed, Parallel, and Cluster Computing
Makes smart devices run AI faster and use less power.