Deep Gaussian Process Proximal Policy Optimization
By: Matthijs van der Lende, Juan Cardenas-Cartagena
Potential Business Impact:
Helps robots learn safely and explore better.
Uncertainty estimation for Reinforcement Learning (RL) is a critical component in control tasks where agents must balance safe exploration and efficient learning. While deep neural networks have enabled breakthroughs in RL, they often lack calibrated uncertainty estimates. We introduce Deep Gaussian Process Proximal Policy Optimization (GPPO), a scalable, model-free actor-critic algorithm that leverages Deep Gaussian Processes (DGPs) to approximate both the policy and the value function. GPPO remains competitive with Proximal Policy Optimization (PPO) on standard high-dimensional continuous control benchmarks while providing well-calibrated uncertainty estimates that can inform safer and more effective exploration.
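To make the actor-critic setup concrete, below is a minimal sketch (not the authors' code) of PPO's clipped surrogate objective with a Gaussian policy head and a value head. In GPPO the policy mean/variance and the value estimate would come from deep GP layers; here a small MLP stands in so the example stays self-contained, and the names (GaussianActorCritic, ppo_loss, clip_eps, vf_coef) are illustrative assumptions rather than the paper's API.

```python
import torch
import torch.nn as nn


class GaussianActorCritic(nn.Module):
    """Stand-in actor-critic: a deep GP (e.g. built with GPyTorch) would replace these heads."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, act_dim)                # policy mean
        self.log_std = nn.Parameter(torch.zeros(act_dim))   # policy spread (DGP would supply calibrated variance)
        self.v = nn.Linear(hidden, 1)                       # value estimate

    def forward(self, obs):
        h = self.body(obs)
        dist = torch.distributions.Normal(self.mu(h), self.log_std.exp())
        return dist, self.v(h).squeeze(-1)


def ppo_loss(model, obs, act, old_logp, adv, ret, clip_eps=0.2, vf_coef=0.5):
    """Clipped surrogate policy loss plus squared-error value loss."""
    dist, value = model(obs)
    logp = dist.log_prob(act).sum(-1)
    ratio = torch.exp(logp - old_logp)                      # importance ratio vs. old policy
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    policy_loss = -torch.min(ratio * adv, clipped * adv).mean()
    value_loss = (ret - value).pow(2).mean()
    return policy_loss + vf_coef * value_loss
```

The clipping keeps each update close to the data-collecting policy; swapping the MLP heads for DGP layers is what would give the policy and value function the calibrated uncertainty the abstract describes.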
Similar Papers
Overcoming Overfitting in Reinforcement Learning via Gaussian Process Diffusion Policy
Machine Learning (CS)
Helps robots learn new tricks even when things change.
Reparameterization Proximal Policy Optimization
Machine Learning (CS)
Teaches robots to learn faster and more reliably.
DVPO: Distributional Value Modeling-based Policy Optimization for LLM Post-Training
Machine Learning (CS)
Teaches AI to learn better from messy information.