Equilibrium Propagation Without Limits

Published: November 27, 2025 | arXiv ID: 2511.22024v1

By: Elon Litman

BigTech Affiliations: Stanford University

Potential Business Impact:

Enables local learning rules to use strong, finite error signals instead of vanishingly small nudges, allowing larger and more robust training steps.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

We liberate Equilibrium Propagation (EP) from the limit of infinitesimal perturbations by establishing a finite-nudge foundation for local credit assignment. By modeling network states as Gibbs-Boltzmann distributions rather than deterministic points, we prove that the gradient of the difference in Helmholtz free energy between a nudged and free phase is exactly the difference in expected local energy derivatives. This validates the classic Contrastive Hebbian Learning update as an exact gradient estimator for arbitrary finite nudging, requiring neither infinitesimal approximations nor convexity. Furthermore, we derive a generalized EP algorithm based on the path integral of loss-energy covariances, enabling learning with strong error signals that standard infinitesimal approximations cannot support.
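The central identity can be checked numerically on a toy system. The sketch below (my own illustration, not code from the paper) builds Gibbs-Boltzmann distributions over a small discrete state space with energy E_beta(s) = theta*f(s) + beta*loss(s), and verifies that the theta-gradient of the free-energy difference between the nudged (beta > 0) and free (beta = 0) phases equals the difference of the expected local energy derivatives E_beta[f] - E_0[f] — exactly, for an arbitrary finite nudge beta. The state space, f, and loss are made up for the demonstration.

```python
import numpy as np

states = np.arange(5.0)            # toy discrete state space (hypothetical)
f = np.sin(states)                 # ∂E/∂θ at each state, since E = θ·f(s) + β·loss(s)
loss = (states - 2.0) ** 2         # loss term nudged into the energy

def free_energy(theta, beta):
    # Helmholtz free energy F = -log Z (temperature set to 1)
    e = theta * f + beta * loss
    return -np.log(np.sum(np.exp(-e)))

def expected_dEdtheta(theta, beta):
    # E_beta[∂E/∂θ] under the Gibbs-Boltzmann distribution p ∝ exp(-E_beta)
    e = theta * f + beta * loss
    p = np.exp(-e)
    p /= p.sum()
    return np.sum(p * f)

theta, beta, eps = 0.7, 1.5, 1e-6  # note: beta is finite, not infinitesimal

# Left side: central finite difference of F_beta - F_0 with respect to theta
lhs = (free_energy(theta + eps, beta) - free_energy(theta - eps, beta)
       - free_energy(theta + eps, 0.0) + free_energy(theta - eps, 0.0)) / (2 * eps)

# Right side: difference of expected local energy derivatives (the CHL update)
rhs = expected_dEdtheta(theta, beta) - expected_dEdtheta(theta, 0.0)

print(abs(lhs - rhs) < 1e-6)
```

Since dF/dtheta = E[∂E/∂θ] holds for each phase separately, the contrastive difference is an exact gradient of the free-energy gap with no convexity or small-beta assumption, which is the claim the check illustrates.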

Country of Origin
🇺🇸 United States

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)