Scaling Equilibrium Propagation to Deeper Neural Network Architectures
By: Sankar Vinayak E. P., Gopalakrishnan Srinivasan
Potential Business Impact:
Makes brain-like computers learn more like real brains do.
Equilibrium propagation has been proposed as a biologically plausible alternative to the backpropagation algorithm. The local nature of its gradient computations, combined with the use of convergent RNNs to reach equilibrium states, makes this approach well-suited for implementation on neuromorphic hardware. However, previous studies on equilibrium propagation have been restricted to networks containing only dense layers, or to relatively small architectures with a few convolutional layers followed by a final dense layer. These networks exhibit a significant accuracy gap compared to similarly sized feedforward networks trained with backpropagation. In this work, we introduce the Hopfield-Resnet architecture, which incorporates residual (or skip) connections in Hopfield networks with clipped $\mathrm{ReLU}$ as the activation function. The proposed architectural enhancements enable the training of networks with nearly twice the number of layers reported in prior works. For example, Hopfield-Resnet13 achieves 93.92\% accuracy on CIFAR-10, which is $\approx$3.5\% higher than the previous best result and comparable to that provided by Resnet13 trained using backpropagation.
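To make the two ingredients named in the abstract concrete, here is a minimal NumPy sketch, not taken from the paper, of a clipped-ReLU activation and a ResNet-style skip connection inside the fixed-point relaxation of a convergent recurrent network. The function names (`clipped_relu`, `relax`), the number of relaxation steps, the step size `dt`, and the layer sizes are all illustrative assumptions, and for clarity the sketch omits the top-down (feedback) connections that a true Hopfield/equilibrium-propagation network would include.

```python
import numpy as np

def clipped_relu(x, upper=1.0):
    """ReLU whose output is clipped to [0, upper], keeping layer states bounded."""
    return np.clip(x, 0.0, upper)

def relax(x, weights, steps=30, dt=0.5):
    """Iterate layer states toward an equilibrium (the 'free' phase).

    x       : input vector, held fixed (clamped) during relaxation
    weights : list of (W, b) pairs, one per layer (assumed shapes only)
    """
    # initialise hidden states at zero
    states = [np.zeros_like(b) for _, b in weights]
    for _ in range(steps):
        prev = x
        new_states = []
        for i, (W, b) in enumerate(weights):
            pre = prev @ W + b
            # residual (skip) connection: add the previous layer's state
            # whenever the dimensions match, as in a ResNet-style block
            if prev.shape == pre.shape:
                pre = pre + prev
            # leaky integration of the state toward its clipped-ReLU target
            s = (1 - dt) * states[i] + dt * clipped_relu(pre)
            new_states.append(s)
            prev = s
        states = new_states
    return states

# illustrative usage with random weights
rng = np.random.default_rng(0)
x = rng.normal(size=32)
weights = [(rng.normal(scale=0.1, size=(32, 64)), np.zeros(64)),
           (rng.normal(scale=0.1, size=(64, 64)), np.zeros(64))]
equilibrium_states = relax(x, weights)
```

In full equilibrium propagation training, this free-phase relaxation would be followed by a second, "nudged" relaxation in which the output is weakly pulled toward the target, and each weight would be updated from the local contrast between its layer's activities at the two equilibria.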
Similar Papers
Toward Practical Equilibrium Propagation: Brain-inspired Recurrent Neural Network with Feedback Regulation and Residual Connections
Neural and Evolutionary Computing
Makes AI learn faster and cheaper, like a brain.
StochEP: Stochastic Equilibrium Propagation for Spiking Convergent Recurrent Neural Networks
Emerging Technologies
Trains brain-like computers to learn faster and better.
Learning Dynamics in Memristor-Based Equilibrium Propagation
Machine Learning (CS)
Makes computers learn faster and use less power.