Gradient Descent as Implicit EM in Distance-Based Neural Models
By: Alan Oursland
Neural networks trained with standard objectives exhibit behaviors characteristic of probabilistic inference: soft clustering, prototype specialization, and Bayesian uncertainty tracking. These phenomena appear across architectures -- in attention mechanisms, classification heads, and energy-based models -- yet existing explanations rely on loose analogies to mixture models or post-hoc architectural interpretation. We provide a direct derivation. For any objective with log-sum-exp structure over distances or energies, the gradient with respect to each distance is exactly the negative posterior responsibility of the corresponding component: $\partial L / \partial d_j = -r_j$. This is an algebraic identity, not an approximation. The immediate consequence is that gradient descent on such objectives performs expectation-maximization implicitly -- responsibilities are not auxiliary variables to be computed but gradients to be applied. No explicit inference algorithm is required because inference is embedded in optimization. This result unifies three regimes of learning under a single mechanism: unsupervised mixture modeling, where responsibilities are fully latent; attention, where responsibilities are conditioned on queries; and cross-entropy classification, where supervision clamps responsibilities to targets. The Bayesian structure recently observed in trained transformers is not an emergent property but a necessary consequence of the objective geometry. Optimization and inference are the same process.
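To make the central identity concrete: writing the objective as $L = \log \sum_k \exp(-d_k)$ (one natural reading of the log-sum-exp structure over distances; the paper may fix a different sign convention), the chain rule gives $\partial L / \partial d_j = -\exp(-d_j) / \sum_k \exp(-d_k) = -r_j$, i.e. the negative softmax responsibility of component $j$. The sketch below is illustrative and not taken from the paper; it verifies the identity numerically with PyTorch autograd on arbitrary distances.

```python
import torch

# Minimal numerical check of the identity (a sketch, not code from the paper).
# Assumed objective: L = log sum_k exp(-d_k) over hypothetical component distances d_k.
# Autograd should then give dL/dd_j = -softmax(-d)_j = -r_j.
d = torch.tensor([0.5, 2.0, 1.3], requires_grad=True)  # illustrative distances
L = torch.logsumexp(-d, dim=0)                          # log-sum-exp objective
L.backward()

r = torch.softmax(-d.detach(), dim=0)  # posterior responsibilities r_j
print(d.grad)   # dL/dd_j for each component
print(-r)       # matches d.grad: the gradient is the negative responsibility
```

Under this assumed convention, a gradient step on each $d_j$ is weighted exactly by its responsibility, which is the sense in which the E-step is applied as a gradient rather than computed as an auxiliary variable.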