Near-Optimality of Contrastive Divergence Algorithms

Published: October 15, 2025 | arXiv ID: 2510.13438v1

By: Pierre Glaser, Kevin Han Huang, Arthur Gretton

Potential Business Impact:

Enables faster and more statistically efficient training of unnormalized machine-learning models.

Business Areas:
A/B Testing, Data and Analytics

We perform a non-asymptotic analysis of the contrastive divergence (CD) algorithm, a training method for unnormalized models. While prior work has established that (for exponential family distributions) the CD iterates asymptotically converge at an $O(n^{-1/3})$ rate to the true parameter of the data distribution, we show, under some regularity assumptions, that CD can achieve the parametric rate $O(n^{-1/2})$. Our analysis provides results for various data batching schemes, including the fully online and minibatch ones. We additionally show that CD can be near-optimal, in the sense that its asymptotic variance is close to the Cramér-Rao lower bound.
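
For readers unfamiliar with contrastive divergence, the sketch below illustrates the fully online scheme mentioned in the abstract on a toy exponential-family model. The model, the Metropolis-Hastings kernel, and the step-size schedule are illustrative assumptions, not the paper's exact construction or analysis setting.

```python
# Minimal sketch of online contrastive divergence (CD-k) for a toy
# exponential-family model p_theta(x) ∝ exp(theta * x - x**2 / 2),
# i.e. a unit-variance Gaussian whose natural parameter theta is its mean.
# All modeling choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mh_step(x, theta, prop_std=1.0):
    """One Metropolis-Hastings step targeting p_theta (no normalizer needed)."""
    prop = x + prop_std * rng.standard_normal(x.shape)
    log_ratio = (theta * prop - prop**2 / 2) - (theta * x - x**2 / 2)
    accept = rng.random(x.shape) < np.exp(np.minimum(log_ratio, 0.0))
    return np.where(accept, prop, x)

def cd_online(data, lr=0.1, k=1):
    """Fully online CD-k: one data point per update, MCMC chain started at the data."""
    theta = 0.0
    for t, x in enumerate(data, start=1):
        x_model = np.array([x])
        for _ in range(k):                 # k MCMC steps initialized at the data point
            x_model = mh_step(x_model, theta)
        grad = x - x_model.item()          # T(x_data) - T(x_model), with T(x) = x
        theta += (lr / np.sqrt(t)) * grad  # decaying step size
    return theta

data = rng.normal(loc=2.0, scale=1.0, size=5000)  # true natural parameter = 2
print(cd_online(data))                            # estimate should land near 2
```

Because the chain is initialized at the data, the data distribution is stationary for the kernel at the true parameter, so the true parameter is a fixed point of the population update; the paper's contribution concerns how fast (and with what asymptotic variance) such iterates approach it.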

Country of Origin
🇬🇧 United Kingdom

Page Count
54 pages

Category
Statistics: Machine Learning (stat.ML)