Spike Agreement Dependent Plasticity: A Scalable Bio-Inspired Learning Paradigm for Spiking Neural Networks
By: Saptarshi Bej, Muhammed Sahad E, Gouri Lakshmi, and more
Potential Business Impact:
Makes brain-like computers learn faster and better.
We introduce Spike Agreement Dependent Plasticity (SADP), a biologically inspired synaptic learning rule for Spiking Neural Networks (SNNs) that relies on the agreement between pre- and post-synaptic spike trains rather than precise spike-pair timing. SADP generalizes classical Spike-Timing-Dependent Plasticity (STDP) by replacing pairwise temporal updates with population-level correlation metrics such as Cohen's kappa. The SADP update rule admits linear-time complexity and supports efficient hardware implementation via bitwise logic. Empirical results on MNIST and Fashion-MNIST show that SADP, especially when equipped with spline-based kernels derived from our experimental iontronic organic memtransistor device data, outperforms classical STDP in both accuracy and runtime. Our framework bridges the gap between biological plausibility and computational scalability, offering a viable learning mechanism for neuromorphic systems.
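The abstract describes replacing pairwise spike-timing updates with a population-level agreement metric such as Cohen's kappa between pre- and post-synaptic spike trains. A minimal sketch of what such an agreement-driven update could look like is below; the function names, the learning rate, and the simple additive update form are our own illustrative assumptions, not the paper's exact rule or kernel:

```python
import numpy as np

def cohens_kappa(pre, post):
    """Cohen's kappa between two equal-length binary spike trains.

    Measures agreement beyond chance: +1 for identical trains,
    0 for chance-level agreement, negative for systematic disagreement.
    """
    pre = np.asarray(pre, dtype=bool)
    post = np.asarray(post, dtype=bool)
    # Observed agreement: fraction of time bins where both trains match.
    p_obs = np.mean(pre == post)
    # Chance agreement from each train's firing rate.
    p_pre, p_post = pre.mean(), post.mean()
    p_chance = p_pre * p_post + (1 - p_pre) * (1 - p_post)
    if p_chance == 1.0:  # degenerate case: both trains constant and equal
        return 1.0
    return (p_obs - p_chance) / (1 - p_chance)

def sadp_update(w, pre, post, lr=0.01):
    """Hypothetical SADP-style rule: weight moves with spike agreement.

    Agreement above chance potentiates the synapse; below chance
    depresses it. This is a sketch, not the paper's spline-based kernel.
    """
    return w + lr * cohens_kappa(pre, post)
```

Note that the observed-agreement count over binary trains reduces to a popcount of a bitwise XNOR, which is consistent with the abstract's claim of linear-time complexity and efficient bitwise hardware implementation.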
Similar Papers
Synchrony-Gated Plasticity with Dopamine Modulation for Spiking Neural Networks
Neural and Evolutionary Computing
Makes AI learn better by mimicking brain signals.
Learning with Spike Synchrony in Spiking Neural Networks
Neural and Evolutionary Computing
Teaches computers to learn like brains.
Efficient Training of Spiking Neural Networks by Spike-aware Data Pruning
Neural and Evolutionary Computing
Trains smart computer brains much faster.