Emergence of Nonequilibrium Latent Cycles in Unsupervised Generative Modeling
By: Marco Baiesi, Alberto Rosso
We show that nonequilibrium dynamics can play a constructive role in unsupervised machine learning by inducing the spontaneous emergence of latent-state cycles. We introduce a model in which visible and hidden variables interact through two independently parametrized transition matrices, defining a Markov chain whose steady state is intrinsically out of equilibrium. Likelihood maximization drives this system toward nonequilibrium steady states with finite entropy production, reduced self-transition probabilities, and persistent probability currents in the latent space. These cycles are not imposed by the architecture but arise from training, and models that develop them avoid the low-log-likelihood regime associated with nearly reversible dynamics while reproducing the empirical distribution of data classes more faithfully. In contrast with equilibrium approaches such as restricted Boltzmann machines, our model breaks detailed balance between the forward and backward conditional transitions and relies on a log-likelihood gradient that depends explicitly on the last two steps of the Markov chain. This exploration of the interface between nonequilibrium statistical physics and modern machine learning thus suggests that introducing irreversibility into latent-variable models can enhance generative performance.
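To make the setup concrete, here is a minimal numerical sketch, not the authors' implementation: it assumes discrete visible and hidden states, uses two hypothetical independently parametrized conditional kernels P(h|v) and Q(v|h), and collapses the alternating visible-hidden updates into a composite kernel on visible states. All names and sizes below are illustrative. It illustrates why independent parametrization generically yields a steady state with nonzero entropy production, the signature of broken detailed balance highlighted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 3  # illustrative sizes, not from the paper

def random_stochastic(rows, cols, rng):
    """Row-stochastic matrix: each row is a conditional distribution."""
    m = rng.random((rows, cols))
    return m / m.sum(axis=1, keepdims=True)

# Two INDEPENDENTLY parametrized conditional kernels (hypothetical names):
P = random_stochastic(n_visible, n_hidden, rng)  # P[v, h] = P(h | v)
Q = random_stochastic(n_hidden, n_visible, rng)  # Q[h, v] = Q(v | h)

# Composite one-step kernel on visible states:
# W[v, v'] = sum_h P(h | v) Q(v' | h)
W = P @ Q

# Stationary distribution pi solves pi W = pi, i.e. it is the
# Perron eigenvector of W^T with eigenvalue 1.
evals, evecs = np.linalg.eig(W.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Steady-state entropy production rate of the chain,
# sigma = sum_{v,v'} pi_v W[v,v'] * log( pi_v W[v,v'] / (pi_{v'} W[v',v]) ),
# which vanishes if and only if detailed balance holds.
flux_fwd = pi[:, None] * W        # pi_v W[v, v']
flux_bwd = flux_fwd.T             # pi_{v'} W[v', v]
sigma = np.sum(flux_fwd * np.log(flux_fwd / flux_bwd))
print(f"entropy production per step: {sigma:.4f}")  # > 0 for generic P, Q
```

For a restricted Boltzmann machine, both conditional kernels derive from a single energy function, detailed balance holds, and sigma vanishes; letting P and Q vary independently is what opens the door to the finite entropy production and persistent latent currents described above.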