De Finetti + Sanov = Bayes
By: Nicholas G. Polson, Daniel Zantedeschi
Potential Business Impact:
Makes computers learn from less data.
We develop a framework that operationalizes models and parameters by combining de Finetti's representation theorem with a conditional form of Sanov's theorem. This synthesis, the tilted de Finetti theorem, shows that conditioning exchangeable sequences on empirical moment constraints yields predictive laws in exponential families via the I-projection of a baseline measure. Parameters emerge as limits of empirical functionals, providing a probabilistic foundation for maximum entropy (MaxEnt) principles. This explains why exponential tilting governs likelihood methods and Bayesian updating, and it connects naturally to finite-sample concentration rates that anticipate PAC-Bayes bounds. Examples include Gaussian scale mixtures, where symmetry uniquely selects location-scale families, and Jaynes' Brandeis dice problem, where partial information tilts the uniform law. Broadly, the theorem unifies exchangeability, large deviations, and entropy concentration, clarifying the ubiquity of exponential families and MaxEnt's role as the inevitable predictive limit under partial information.
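As a concrete illustration of the exponential tilting the abstract describes, the sketch below works through Jaynes' Brandeis dice problem numerically: conditioning the uniform law on faces {1,...,6} on an empirical mean of 4.5 yields the I-projection, an exponentially tilted law of the form p_k proportional to exp(lam * k), with the tilt parameter found by root-finding. This is a minimal sketch assuming Python with NumPy and SciPy; the function names (tilted, mean_gap) are illustrative, not from the paper.

import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)   # die outcomes 1..6
target_mean = 4.5         # Jaynes' partial-information moment constraint

def tilted(lam):
    """Exponentially tilted uniform law on the six faces: p_k ∝ exp(lam * k)."""
    w = np.exp(lam * faces)
    return w / w.sum()

def mean_gap(lam):
    """Difference between the tilted law's mean and the target mean."""
    return tilted(lam) @ faces - target_mean

# Solve for the tilt parameter. At lam = 0 the law is uniform (mean 3.5),
# so a positive lam is needed to shift mass toward the high faces.
lam = brentq(mean_gap, -5.0, 5.0)
p = tilted(lam)
print("tilt parameter:", round(lam, 4))   # ~0.371, matching Jaynes' solution
print("tilted law:", np.round(p, 4))      # monotone increasing over the faces

The exponential form is not an assumption of the sketch but the structure the tilted de Finetti theorem predicts: the I-projection of the uniform baseline onto the mean constraint is exactly this one-parameter exponential family.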
Similar Papers
Bayesian Prediction under Moment Conditioning
Statistics Theory
Helps computers guess better with missing info.
Generalized Bayes in Conditional Moment Restriction Models
Econometrics
Helps economists understand how companies make things.
Variational Bernstein-von Mises theorem with increasing parameter dimension
Statistics Theory
Proves fast stats shortcuts work for big data.