Energy-Efficient FPGA Framework for Non-Quantized Convolutional Neural Networks

Published: October 15, 2025 | arXiv ID: 2510.13362v1

By: Angelos Athanasiadis, Nikolaos Tampouratzis, Ioannis Papaefstathiou

Potential Business Impact:

Runs full-accuracy AI vision models faster and with less power on small, embedded devices.

Business Areas:
Field-Programmable Gate Array (FPGA) Hardware

The growing demand for real-time processing in artificial intelligence applications, particularly those involving Convolutional Neural Networks (CNNs), has highlighted the need for efficient computational solutions. Conventional processors often fall short in balancing performance, power consumption, and latency, especially in embedded systems and edge computing platforms. Field-Programmable Gate Arrays (FPGAs) offer a promising alternative, combining high performance with energy efficiency and reconfigurability. The presented framework addresses the complex and demanding computations of CNNs on FPGAs while maintaining full precision in all neural network parameters. Specifically, our framework is based on Darknet, which is widely used for the design of CNNs, and allows the designer, using an input similar to that given to Darknet, to efficiently implement a CNN on a heterogeneous system comprising CPUs and FPGAs. Compared with FPGA frameworks that rely on quantization, our solution aims to offer similar performance and/or energy efficiency without any degradation in NN accuracy.
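To make the "full precision, no quantization" point concrete, the sketch below shows the kind of unquantized, 32-bit floating-point convolution kernel that such a framework would map to FPGA logic. It is only an illustration under our own assumptions (naive single-channel convolution, stride 1, no padding), not code from the paper or from Darknet itself; the function name and parameters are hypothetical.

#include <stddef.h>

/* Illustrative full-precision (float32) 2D convolution over one input/output
 * channel pair, stride 1, no padding. Hypothetical sketch; not the authors'
 * implementation. All arithmetic stays in float, i.e. no quantization step. */
void conv2d_fp32(const float *in, size_t in_h, size_t in_w,
                 const float *kernel, size_t k,       /* k x k weights */
                 float *out)                          /* (in_h-k+1) x (in_w-k+1) */
{
    size_t out_h = in_h - k + 1;
    size_t out_w = in_w - k + 1;

    for (size_t oy = 0; oy < out_h; ++oy) {
        for (size_t ox = 0; ox < out_w; ++ox) {
            float acc = 0.0f;
            /* Multiply-accumulate in full precision. */
            for (size_t ky = 0; ky < k; ++ky)
                for (size_t kx = 0; kx < k; ++kx)
                    acc += in[(oy + ky) * in_w + (ox + kx)] * kernel[ky * k + kx];
            out[oy * out_w + ox] = acc;
        }
    }
}

In a quantization-based FPGA flow, the multiply-accumulate above would instead operate on low-bit integers; keeping it in float is what preserves the original network accuracy at the cost of larger arithmetic units on the FPGA.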

Country of Origin
🇬🇷 Greece

Page Count
2 pages

Category
Computer Science:
Hardware Architecture