CMOS Implementation of Field Programmable Spiking Neural Network for Hardware Reservoir Computing
By: Ckristian Duran, Nanako Kimura, Zolboo Byambadorj, and more
Potential Business Impact:
Makes smart computer brains use less power.
The increasing complexity and energy demands of large-scale neural networks, such as Deep Neural Networks (DNNs) and Large Language Models (LLMs), challenge their practical deployment in edge applications due to high power consumption, area requirements, and privacy concerns. Spiking Neural Networks (SNNs), particularly in analog implementations, offer a promising low-power alternative but suffer from noise sensitivity and connectivity limitations. This work presents a novel CMOS-implemented field-programmable neural network architecture for hardware reservoir computing. We propose a Leaky Integrate-and-Fire (LIF) neuron circuit with integrated voltage-controlled oscillators (VCOs) and programmable weighted interconnections via an on-chip FPGA framework, enabling arbitrary reservoir configurations. The system demonstrates effective implementation of FORCE learning, linear and non-linear memory capacity benchmarks, and NARMA10 tasks, in both simulation and actual chip measurements. The neuron design achieves compact area utilization (around 540 NAND2-equivalent units) and low energy consumption (21.7 pJ/pulse) without requiring ADCs for information readout, making it ideal for system-on-chip integration of reservoir computing. This architecture paves the way for scalable, energy-efficient neuromorphic systems capable of performing real-time learning and inference with high configurability and digital interfacing.
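For readers unfamiliar with the neuron model named in the abstract, the sketch below shows generic discrete-time Leaky Integrate-and-Fire (LIF) dynamics in Python. It is only an illustrative software model, not the paper's VCO-based analog circuit; the parameters (tau_m, v_th, v_reset, dt) and the step-current test input are assumptions chosen for demonstration.

```python
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau_m=20e-3, v_th=1.0, v_reset=0.0):
    """Simulate a single LIF neuron; return membrane trace and spike train.

    This is a minimal textbook LIF model (illustrative parameters),
    not the VCO-based CMOS neuron described in the paper.
    """
    v = 0.0
    v_trace, spikes = [], []
    for i_in in input_current:
        # Leaky integration: dv/dt = (-v + i_in) / tau_m
        v += dt * (-v + i_in) / tau_m
        fired = v >= v_th
        if fired:
            v = v_reset  # reset membrane potential after a spike
        v_trace.append(v)
        spikes.append(1 if fired else 0)
    return np.array(v_trace), np.array(spikes)

if __name__ == "__main__":
    t = np.arange(0, 0.2, 1e-3)
    current = 1.2 * (t > 0.05)  # step input switched on at 50 ms
    v, s = lif_simulate(current)
    print("spike count:", int(s.sum()))
```

In a reservoir-computing setup such as the one described, many such neurons would be recurrently coupled through programmable weights, and only a linear readout (here trained, for example, with FORCE learning) is adapted on tasks like NARMA10.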
Similar Papers
Biological Intuition on Digital Hardware: An RTL Implementation of Poisson-Encoded SNNs for Static Image Classification
Hardware Architecture
Makes smart devices use less power for thinking.
Implementation of high-efficiency, lightweight residual spiking neural network processor based on field-programmable gate arrays
Neural and Evolutionary Computing
Makes AI chips use less power for faster thinking.
Ultra-Low-Power Spiking Neurons in 7 nm FinFET Technology: A Comparative Analysis of Leaky Integrate-and-Fire, Morris-Lecar, and Axon-Hillock Architectures
Neural and Evolutionary Computing
Makes computer brains faster and use less power.