Improving Low-Latency Learning Performance in Spiking Neural Networks via a Change-Perceptive Dendrite-Soma-Axon Neuron
By: Zeyu Huang, Wei Meng, Quan Liu, et al.
Spiking neurons, the fundamental information-processing units of Spiking Neural Networks (SNNs), emit all-or-zero outputs, which makes SNNs more energy-efficient than Artificial Neural Networks (ANNs). However, the hard reset mechanism used in spiking neurons treats diverse membrane potentials uniformly after a spike and therefore degrades information. Moreover, overly simplified neuron models that disregard intricate biological structures inherently limit a network's capacity to simulate the actual potential transmission process accurately. To address these issues, we propose a dendrite-soma-axon (DSA) neuron that employs a soft reset strategy, combined with a perception mechanism based on potential changes, yielding the change-perceptive dendrite-soma-axon (CP-DSA) neuron. Our model contains multiple learnable parameters that expand the representation space of the neuron. The change-perceptive (CP) mechanism exploits the difference information between adjacent time steps, enabling competitive performance at short time steps. Rigorous theoretical analysis demonstrates the efficacy of the CP-DSA model and the functional roles of its internal parameters, and extensive experiments on various datasets confirm its significant advantages over state-of-the-art approaches.
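To make the reset and change-perception ideas concrete, the sketch below shows a LIF-style neuron with a soft reset (subtracting the threshold rather than forcing the potential to a fixed value, so the residual potential above threshold is retained) and an illustrative change-perceptive term driven by the potential difference between adjacent time steps. This is a minimal sketch under our own assumptions, not the authors' CP-DSA implementation: the class name `CPSoftResetLIF`, the parameters `tau` and `alpha`, and the surrogate gradient are hypothetical choices for illustration, and the dendrite-soma-axon compartments of the actual model are not reproduced here.

```python
# Minimal sketch (not the authors' implementation): a LIF-style spiking neuron
# with (a) soft reset -- subtract the threshold instead of resetting to a fixed
# value -- and (b) an illustrative "change-perceptive" term that feeds the
# potential difference between adjacent time steps back into the update.
# Names such as CPSoftResetLIF, tau, and alpha are assumptions for illustration.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient for backprop."""

    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradient only in a window around the threshold, zero elsewhere.
        surrogate = (torch.abs(v - ctx.threshold) < 0.5).float()
        return grad_output * surrogate, None


class CPSoftResetLIF(nn.Module):
    """Hypothetical LIF neuron with soft reset and a change-perceptive term."""

    def __init__(self, tau: float = 2.0, threshold: float = 1.0):
        super().__init__()
        self.threshold = threshold
        # Learnable decay and change-perception strength (illustrative).
        self.tau = nn.Parameter(torch.tensor(tau))
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: [T, batch, features] input currents over T time steps.
        v = torch.zeros_like(x_seq[0])
        v_prev = torch.zeros_like(v)
        spikes = []
        for t in range(x_seq.shape[0]):
            # Leaky integration plus a term driven by how much v changed
            # between adjacent time steps (the "change-perceptive" idea).
            dv = v - v_prev
            v_prev = v
            v = v + (x_seq[t] - v) / self.tau + self.alpha * dv
            s = SurrogateSpike.apply(v, self.threshold)
            # Soft reset: subtract the threshold from spiking neurons so the
            # residual potential above threshold is not discarded.
            v = v - s * self.threshold
            spikes.append(s)
        return torch.stack(spikes)


if __name__ == "__main__":
    neuron = CPSoftResetLIF()
    out = neuron(torch.randn(4, 2, 8))  # 4 time steps, batch 2, 8 features
    print(out.shape)  # torch.Size([4, 2, 8])
```

The soft reset here illustrates why uniform hard resets lose information: two neurons that cross the threshold by different margins would otherwise be reset to the same value, whereas subtracting the threshold preserves that difference for the next time step.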