StochEP: Stochastic Equilibrium Propagation for Spiking Convergent Recurrent Neural Networks

Published: November 14, 2025 | arXiv ID: 2511.11320v1

By: Jiaqi Lin, Yi Jiang, Abhronil Sengupta

Potential Business Impact:

Enables brain-inspired (neuromorphic) chips to be trained with local, energy-efficient learning rules, improving accuracy and scalability.

Business Areas:
Neuroscience Biotechnology, Science and Engineering

Spiking Neural Networks (SNNs) promise energy-efficient, sparse, biologically inspired computation. Training them with Backpropagation Through Time (BPTT) and surrogate gradients achieves strong performance but remains biologically implausible. Equilibrium Propagation (EP) provides a more local and biologically grounded alternative. However, existing EP frameworks, primarily based on deterministic neurons, either require complex mechanisms to handle discontinuities in spiking dynamics or fail to scale beyond simple visual tasks. Inspired by the stochastic nature of biological spiking mechanisms and recent hardware trends, we propose a stochastic EP framework that integrates probabilistic spiking neurons into the EP paradigm. This formulation smooths the optimization landscape, stabilizes training, and enables scalable learning in deep convolutional spiking convergent recurrent neural networks (CRNNs). We prove that the proposed stochastic EP dynamics approximate deterministic EP under mean-field theory, so the framework inherits deterministic EP's theoretical guarantees. The proposed framework narrows the gap to both BPTT-trained SNNs and EP-trained non-spiking CRNNs on vision benchmarks while preserving locality, highlighting stochastic EP as a promising direction for neuromorphic and on-chip learning.
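The mean-field argument sketched in the abstract can be illustrated with a minimal probabilistic spiking neuron: if each neuron fires as a Bernoulli draw whose probability is a smooth function of its membrane potential, then the trial-averaged firing rate converges to that smooth activation, recovering deterministic-style dynamics in expectation. The sigmoid link function and the specific values below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    """Smooth firing-probability function of the membrane potential u."""
    return 1.0 / (1.0 + np.exp(-u))

def stochastic_spike(u, rng):
    """Emit binary spikes: each neuron fires with probability sigmoid(u)."""
    return (rng.random(u.shape) < sigmoid(u)).astype(float)

# Mean-field check: averaging the stochastic spikes over many trials
# approaches the deterministic activation sigmoid(u).
u = np.array([-2.0, 0.0, 1.5])          # example membrane potentials
trials = 20000
rate = np.mean([stochastic_spike(u, rng) for _ in range(trials)], axis=0)

print(rate)          # empirical firing rates, close to sigmoid(u)
print(sigmoid(u))    # deterministic mean-field target
```

Individual spikes are discontinuous in `u`, but the expected rate is smooth, which is the property the paper exploits to avoid the discontinuity-handling machinery of deterministic spiking EP.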

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science:
Emerging Technologies