Energy-Efficient FPGA Framework for Non-Quantized Convolutional Neural Networks
By: Angelos Athanasiadis, Nikolaos Tampouratzis, Ioannis Papaefstathiou
Potential Business Impact:
Makes AI faster and smarter on small devices.
The growing demand for real-time processing in artificial intelligence applications, particularly those involving Convolutional Neural Networks (CNNs), has highlighted the need for efficient computational solutions. Conventional processors often fall short in balancing performance, power consumption, and latency, especially in embedded systems and edge computing platforms. Field-Programmable Gate Arrays (FPGAs) offer a promising alternative, combining high performance with energy efficiency and reconfigurability. The presented framework addresses the complex and demanding computations of CNNs on FPGAs while maintaining full precision in all neural network parameters. Specifically, our framework is based on Darknet, which is widely used for the design of CNNs, and allows the designer, using an input similar to that given to Darknet, to efficiently implement a CNN on a heterogeneous system comprising CPUs and FPGAs. Compared with FPGA frameworks that support quantization, our solution aims to offer similar performance and/or energy efficiency without any degradation in NN accuracy.
Similar Papers
A Resource-Driven Approach for Implementing CNNs on FPGAs Using Adaptive IPs
Hardware Architecture
Makes AI run faster on small chips.
FPGA-based Acceleration for Convolutional Neural Networks: A Comprehensive Review
Machine Learning (CS)
Makes smart computer programs run faster and cheaper.
Implementation of high-efficiency, lightweight residual spiking neural network processor based on field-programmable gate arrays
Neural and Evolutionary Computing
Makes AI chips use less power for faster thinking.