Gaussian-Mixture-Model Q-Functions for Policy Iteration in Reinforcement Learning
By: Minh Vu, Konstantinos Slavakis
This paper introduces a novel function-approximation role for Gaussian mixture models (GMMs) in reinforcement learning (RL): departing from their conventional use as estimators of probability density functions, GMMs serve here as direct surrogates for Q-function losses. These parametric models, termed GMM-QFs, possess substantial representational capacity and are shown to be universal approximators over a broad class of functions. GMM-QFs are embedded within Bellman residuals, where their learnable parameters (a fixed number of mixing weights, together with Gaussian mean vectors and covariance matrices) are inferred from data via optimization on a Riemannian manifold. This geometric view of the parameter space naturally incorporates Riemannian optimization into the policy-evaluation step of standard policy-iteration frameworks. Rigorous theoretical results are established, and supporting numerical tests show that, even without access to experience data, GMM-QFs deliver competitive performance and in some cases outperform state-of-the-art approaches on a range of benchmark RL tasks, while maintaining a significantly smaller computational footprint than deep-learning methods that rely on experience data.
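To make the construction concrete, below is a minimal, illustrative sketch (not the authors' implementation) of a GMM-QF and the squared Bellman residual it would be fitted to during policy evaluation. The names GMMQF and bellman_residual, the discount factor gamma, the placeholder policy, and the use of a plain squared residual on a single transition are assumptions introduced for illustration only; the paper's Riemannian optimization of the mixing weights, mean vectors, and covariance matrices is not shown.

# Illustrative sketch of a GMM-QF: Q(s, a) is a fixed-size Gaussian mixture
# evaluated at the joint state-action vector [s; a]. Whether the mixing
# weights are constrained (e.g., to a probability simplex) is left open here.
import numpy as np
from scipy.stats import multivariate_normal

class GMMQF:
    def __init__(self, weights, means, covariances):
        # weights: (K,) mixing weights; means: (K, d); covariances: (K, d, d),
        # with d = dim(state) + dim(action).
        self.weights = np.asarray(weights)
        self.means = np.asarray(means)
        self.covariances = np.asarray(covariances)

    def q_value(self, state, action):
        # Q(s, a) = sum_k w_k * N([s; a]; mu_k, Sigma_k)
        z = np.concatenate([state, action])
        return sum(
            w * multivariate_normal.pdf(z, mean=m, cov=c)
            for w, m, c in zip(self.weights, self.means, self.covariances)
        )

def bellman_residual(qf, transition, policy, gamma=0.99):
    # Squared Bellman residual on one transition (s, a, r, s'):
    # [Q(s, a) - (r + gamma * Q(s', pi(s')))]^2
    s, a, r, s_next = transition
    target = r + gamma * qf.q_value(s_next, policy(s_next))
    return (qf.q_value(s, a) - target) ** 2

# Toy usage with random parameters (2-D state, 1-D action), illustrative only.
rng = np.random.default_rng(0)
K, d = 5, 3
qf = GMMQF(weights=rng.normal(size=K),
           means=rng.normal(size=(K, d)),
           covariances=np.stack([np.eye(d)] * K))
policy = lambda s: np.array([0.0])  # placeholder policy
loss = bellman_residual(qf, (np.zeros(2), np.zeros(1), 1.0, np.ones(2)), policy)

In the paper's framework, it is the minimization of such a Bellman-residual loss over the GMM parameters, carried out on a Riemannian manifold (presumably with the covariance matrices restricted to symmetric positive-definite matrices), that constitutes the policy-evaluation step of policy iteration; the sketch above only evaluates the loss and leaves the optimizer unspecified.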