Learning Dynamics in Memristor-Based Equilibrium Propagation
By: Michael Döll, Andreas Müller, Bernd Ulmann
Potential Business Impact:
Makes computers learn faster and use less power.
Memristor-based in-memory computing has emerged as a promising paradigm for overcoming the von Neumann bottleneck and the memory wall by enabling fully parallelisable, energy-efficient vector-matrix multiplications. We investigate the effect of nonlinear, memristor-driven weight updates on the convergence behaviour of neural networks trained with equilibrium propagation (EqProp). Six memristor models were characterised by their voltage-current hysteresis and integrated into the EBANA framework for evaluation on two benchmark classification tasks. We find that EqProp can achieve robust convergence under nonlinear weight updates, provided the memristors exhibit a resistance range spanning at least one order of magnitude.
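To make the role of the nonlinearity concrete, here is a minimal Python sketch of how a bounded, state-dependent memristor conductance turns EqProp's ideal contrastive update into a nonlinear one. The window function, device parameters, and function names are illustrative assumptions for this sketch, not the paper's six device models or the EBANA API; the G_MAX/G_MIN ratio stands in for the resistance range the abstract refers to.

```python
import numpy as np

# Illustrative device parameters (assumptions, not the paper's models):
# conductance bounded in [G_MIN, G_MAX]. Their ratio corresponds to the
# resistance range in the abstract; here it spans one order of magnitude.
G_MIN, G_MAX = 1e-6, 1e-5  # siemens

def eqprop_contrastive_grad(rho_free, rho_nudged, beta):
    """EqProp's two-phase gradient estimate for a Hopfield-style energy:
    the difference of activity correlations between the nudged (beta > 0)
    and free (beta = 0) fixed points, scaled by 1/beta."""
    return -(np.outer(rho_nudged, rho_nudged)
             - np.outer(rho_free, rho_free)) / beta

def memristive_update(g, grad, lr=0.1):
    """Nonlinear weight update: a state-dependent 'window' shrinks the
    step as the conductance approaches either bound, a common abstraction
    for memristor programming curves."""
    window = (g - G_MIN) * (G_MAX - g) / (G_MAX - G_MIN) ** 2  # in [0, 0.25]
    return np.clip(g - lr * grad * window * (G_MAX - G_MIN), G_MIN, G_MAX)

rng = np.random.default_rng(0)
g = rng.uniform(G_MIN, G_MAX, size=(3, 3))             # conductances as weights
rho_free = rng.random(3)                               # free-phase activations
rho_nudged = rho_free + 0.05 * rng.standard_normal(3)  # nudged-phase activations
grad = eqprop_contrastive_grad(rho_free, rho_nudged, beta=0.5)
print(memristive_update(g, grad))
```

Because the window vanishes at both conductance bounds, updates saturate near the edges of the device's range; a wider G_MAX/G_MIN ratio leaves more usable headroom, consistent with the abstract's order-of-magnitude finding.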
Similar Papers
Learning at the Speed of Physics: Equilibrium Propagation on Oscillator Ising Machines
Machine Learning (CS)
Computers learn faster by copying how nature works.
Equilibrium Propagation Without Limits
Machine Learning (CS)
Makes AI learn faster with bigger, bolder steps.
Toward Lifelong Learning in Equilibrium Propagation: Sleep-like and Awake Rehearsal for Enhanced Stability
Machine Learning (CS)
Helps AI remember old lessons when learning new ones.