Maxitive Donsker-Varadhan Formulation for Possibilistic Variational Inference
By: Jasraj Singh, Shelvia Wongso, Jeremie Houssineau, and more
Potential Business Impact:
Lets computers learn reliably when information is scarce or imprecise.
Variational inference (VI) is a cornerstone of modern Bayesian learning, enabling approximate inference in complex models that would otherwise be intractable. However, its formulation depends on expectations and divergences defined through high-dimensional integrals, often rendering analytical treatment impossible and necessitating heavy reliance on approximate learning and inference techniques. Possibility theory, an imprecise-probability framework, makes it possible to model epistemic uncertainty directly rather than through subjective probabilities. While this framework provides robustness and interpretability under sparse or imprecise information, adapting VI to the possibilistic setting requires rethinking core concepts such as entropy and divergence, which presuppose additivity. In this work, we develop a principled formulation of possibilistic variational inference and apply it to a special class of exponential-family functions, highlighting parallels with their probabilistic counterparts and revealing the distinctive mathematical structures of possibility theory.
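For context, the two classical probabilistic objects the title and abstract allude to are the Donsker-Varadhan representation of the Kullback-Leibler divergence and the evidence lower bound. The standard forms below are textbook background under generic notation ($P$, $Q$, $f$, $p$, $q$, $x$, $z$ are not taken from the paper), not the paper's maxitive analogues. For a reference probability measure $Q$ and any probability measure $P \ll Q$,
\[
\mathrm{KL}(P \,\|\, Q) \;=\; \sup_{f} \left\{ \mathbb{E}_{P}\!\left[f(X)\right] - \log \mathbb{E}_{Q}\!\left[e^{f(X)}\right] \right\},
\]
with the supremum taken over bounded measurable test functions $f$. In the same probabilistic setting, VI maximises the evidence lower bound
\[
\log p(x) \;\ge\; \mathbb{E}_{q(z)}\!\left[\log p(x, z) - \log q(z)\right] \;=\; \log p(x) - \mathrm{KL}\!\left(q(z) \,\|\, p(z \mid x)\right).
\]
Both objects are built from expectations, i.e. integrals, which is the additivity the abstract refers to; the paper develops a maxitive counterpart suited to possibility theory, and that counterpart is not reproduced here.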
Similar Papers
A Frequentist Statistical Introduction to Variational Inference, Autoencoders, and Diffusion Models
Machine Learning (Stat)
Teaches AI how to learn like humans.
Variational Inference for Latent Variable Models in High Dimensions
Statistics Theory
Makes computer models understand data better.
Variational Inference with Mixtures of Isotropic Gaussians
Machine Learning (Stat)
Finds better computer guesses for complex problems.