Posterior Sampling of Probabilistic Word Embeddings
By: Väinö Yrjänäinen, Isac Boström, Måns Magnusson and more
Potential Business Impact:
Gives reliable estimates of how uncertain word-meaning representations are, making analyses of text data more trustworthy.
Quantifying uncertainty in word embeddings is crucial for reliable inference from textual data. However, existing Bayesian methods such as Hamiltonian Monte Carlo (HMC) and mean-field variational inference (MFVI) are either computationally infeasible for large data or rely on restrictive assumptions. We propose a scalable Gibbs sampler based on Polya-Gamma augmentation, as well as a Laplace approximation, and compare them with MFVI and HMC for word embeddings. In addition, we address non-identifiability in word embeddings. Our Gibbs sampler and HMC correctly estimate uncertainties, while MFVI does not, and the Laplace approximation only does so for large sample sizes, as expected. Applying the Gibbs sampler to the US Congress and MovieLens datasets, we demonstrate its feasibility on larger real-world data. Finally, because we obtain draws from the full posterior, we show that the posterior mean of word embeddings improves over maximum a posteriori (MAP) estimates in terms of hold-out likelihood, especially for smaller sample sizes, further strengthening the case for posterior sampling of word embeddings.
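For intuition on how Polya-Gamma augmentation yields a tractable Gibbs sampler, the sketch below shows one conditional update for a single word vector in a Bernoulli (logistic) embedding model. This is a minimal illustrative sketch, not the paper's implementation: the `polyagamma` package, the function name `gibbs_step_word_vector`, and the scalar `prior_precision` prior are all assumptions made here for the example.

```python
# Sketch: one Polya-Gamma-augmented Gibbs update for a single word vector
# in a logistic (Bernoulli) embedding model. Illustrative only; names and
# the `polyagamma` package are assumptions, not the authors' code.
import numpy as np
from polyagamma import random_polyagamma  # assumed PG sampler (PyPI: polyagamma)

def gibbs_step_word_vector(rho, y, C, prior_precision, rng):
    """Draw a new word vector rho from its full conditional.

    rho             : (d,) current word vector
    y               : (n,) binary co-occurrence indicators for this word
    C               : (n, d) context vectors for each observation (held fixed)
    prior_precision : scalar precision of the Gaussian prior on rho
    rng             : np.random.Generator
    """
    psi = C @ rho                                    # current linear predictor
    # Augmentation: omega_i ~ PG(1, psi_i) makes the conditional for rho Gaussian.
    omega = random_polyagamma(z=psi, random_state=rng)
    # Conjugate Gaussian update (Polson, Scott & Windle, 2013):
    precision = C.T @ (omega[:, None] * C) + prior_precision * np.eye(C.shape[1])
    kappa = y - 0.5                                  # PG pseudo-observations
    cov = np.linalg.inv(precision)
    mean = cov @ (C.T @ kappa)
    return rng.multivariate_normal(mean, cov)
```

In a full sampler one would sweep such updates over all word and context vectors in turn, which is what makes the approach scale to larger corpora while still producing draws from the full posterior.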
Similar Papers
Accelerating Hamiltonian Monte Carlo for Bayesian Inference in Neural Networks and Neural Operators
Machine Learning (Stat)
Makes AI smarter by understanding what it doesn't know.
Modernizing full posterior inference for surrogate modeling of categorical-output simulation experiments
Computation
Helps computers learn from huge amounts of data.
Provable Diffusion Posterior Sampling for Bayesian Inversion
Machine Learning (Stat)
Makes computers learn and guess better from data.