Equilibrium Propagation Without Limits
By: Elon Litman
Potential Business Impact:
Lets AI learn with large, strong error signals instead of tiny nudges.
We liberate Equilibrium Propagation (EP) from the limit of infinitesimal perturbations by establishing a finite-nudge foundation for local credit assignment. By modeling network states as Gibbs-Boltzmann distributions rather than deterministic points, we prove that the parameter gradient of the difference in Helmholtz free energy between the nudged and free phases is exactly the difference in expected local energy derivatives. This validates the classic Contrastive Hebbian Learning update as an exact gradient estimator for arbitrary finite nudging, requiring neither infinitesimal approximations nor convexity. Furthermore, we derive a generalized EP algorithm based on the path integral of loss-energy covariances, enabling learning with strong error signals that standard infinitesimal approximations cannot support.
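The core identity can be sketched as follows; the notation is assumed for illustration and may differ from the paper's (state s, parameters \theta, energy E_\theta(s), per-example loss \ell(s), temperature T, finite nudging strength \beta). Define the nudged Gibbs-Boltzmann distribution and its Helmholtz free energy,

```latex
p_\beta(s) \;\propto\; \exp\!\big(-(E_\theta(s) + \beta\,\ell(s))/T\big),
\qquad
F(\beta,\theta) \;=\; -\,T \log \!\int \exp\!\big(-(E_\theta(s) + \beta\,\ell(s))/T\big)\, ds .
```

Differentiating under the integral gives, for any finite \beta,

```latex
\frac{\partial}{\partial\theta}\,\big[F(\beta,\theta) - F(0,\theta)\big]
\;=\;
\mathbb{E}_{p_\beta}\!\left[\frac{\partial E_\theta}{\partial\theta}\right]
\;-\;
\mathbb{E}_{p_0}\!\left[\frac{\partial E_\theta}{\partial\theta}\right],
```

which is the contrastive Hebbian form: the exact parameter gradient of the free-energy gap equals the difference of expected local energy derivatives between the nudged and free phases. Since the mixed derivative satisfies \partial^2 F / \partial\beta\,\partial\theta = -\tfrac{1}{T}\,\mathrm{Cov}_{p_\beta}\!\big(\ell,\, \partial E_\theta/\partial\theta\big), integrating over the nudging strength rewrites the same contrast as a path integral of loss-energy covariances,

```latex
\mathbb{E}_{p_\beta}\!\left[\frac{\partial E_\theta}{\partial\theta}\right]
- \mathbb{E}_{p_0}\!\left[\frac{\partial E_\theta}{\partial\theta}\right]
\;=\;
-\frac{1}{T}\int_0^\beta \mathrm{Cov}_{p_{\beta'}}\!\Big(\ell(s),\, \frac{\partial E_\theta}{\partial\theta}(s)\Big)\, d\beta' ,
```

which connects the finite-nudge contrast to the path integral of loss-energy covariances mentioned in the abstract (under the assumed notation; the paper's exact construction may differ). Below is a minimal numeric sanity check of the first identity on a one-dimensional toy model, with a quadratic energy and squared-error loss chosen purely for illustration:

```python
# Toy check of the finite-nudge identity: the parameter gradient of the
# free-energy gap equals the contrastive difference of expected dE/dtheta.
# The quadratic energy, squared-error loss, grid-based Gibbs averages, and
# all constants below are illustrative assumptions, not the paper's setup.
import numpy as np

T = 0.5            # temperature of the Gibbs-Boltzmann state distribution
beta = 0.8         # finite nudging strength (deliberately not infinitesimal)
y = 1.3            # target used by the loss
s = np.linspace(-6.0, 6.0, 4001)   # grid over the scalar network state

def energy(s, theta):              # toy energy E_theta(s)
    return 0.5 * (s - theta) ** 2

def loss(s):                       # per-state loss l(s)
    return 0.5 * (s - y) ** 2

def free_energy(theta, b):
    # Helmholtz free energy of p_b(s) ~ exp(-(E + b*l)/T); the grid-spacing
    # constant cancels in the free-energy differences taken below.
    w = np.exp(-(energy(s, theta) + b * loss(s)) / T)
    return -T * np.log(np.sum(w))

def expected_dE_dtheta(theta, b):
    # E_{p_b}[dE/dtheta]; for the toy energy, dE/dtheta = -(s - theta)
    w = np.exp(-(energy(s, theta) + b * loss(s)) / T)
    p = w / np.sum(w)
    return np.sum(p * (-(s - theta)))

theta, eps = 0.4, 1e-4
# Left side: numeric parameter gradient of the free-energy gap F(beta) - F(0).
lhs = ((free_energy(theta + eps, beta) - free_energy(theta + eps, 0.0))
       - (free_energy(theta - eps, beta) - free_energy(theta - eps, 0.0))) / (2 * eps)
# Right side: contrastive (nudged minus free) expected local energy derivative.
rhs = expected_dE_dtheta(theta, beta) - expected_dE_dtheta(theta, 0.0)
print(f"grad of free-energy gap:  {lhs:.6f}")
print(f"contrastive Hebbian term: {rhs:.6f}")   # the two should agree closely
```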
Similar Papers
Scalable Equilibrium Propagation via Intermediate Error Signals for Deep Convolutional CRNNs
Machine Learning (CS)
Trains deep computer brains to learn much faster.
Learning at the Speed of Physics: Equilibrium Propagation on Oscillator Ising Machines
Machine Learning (CS)
Computers learn faster by copying how nature works.
Lagrangian-based Equilibrium Propagation: generalisation to arbitrary boundary conditions & equivalence with Hamiltonian Echo Learning
Machine Learning (CS)
Teaches computers to learn from changing information.