Deep Gaussian Process Proximal Policy Optimization

Published: November 22, 2025 | arXiv ID: 2511.18214v1

By: Matthijs van der Lende, Juan Cardenas-Cartagena

Potential Business Impact:

Helps robots learn safely and explore better.

Business Areas:
Risk Management, Professional Services

Uncertainty estimation for Reinforcement Learning (RL) is a critical component in control tasks where agents must balance safe exploration and efficient learning. While deep neural networks have enabled breakthroughs in RL, they often lack calibrated uncertainty estimates. We introduce Deep Gaussian Process Proximal Policy Optimization (GPPO), a scalable, model-free actor-critic algorithm that leverages Deep Gaussian Processes (DGPs) to approximate both the policy and value function. GPPO maintains competitive performance with respect to Proximal Policy Optimization on standard high-dimensional continuous control benchmarks while providing well-calibrated uncertainty estimates that can inform safer and more effective exploration.
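To illustrate the core idea behind the abstract, the sketch below shows how a Gaussian Process posterior yields both a point estimate (its mean) and a calibrated uncertainty (its standard deviation) that can flag states worth exploring. This is a minimal single-layer GP with an RBF kernel, not the paper's deep GP actor-critic; all function and variable names here are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between row vectors in A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    # Exact GP regression posterior: predictive mean and std at X_test.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

rng = np.random.default_rng(0)
states = rng.uniform(-3, 3, size=(20, 1))                      # observed states
returns = np.sin(states).ravel() + 0.1 * rng.normal(size=20)   # noisy returns
query = np.linspace(-5, 5, 7)[:, None]                         # query states
value, uncertainty = gp_posterior(states, returns, query)
# Far from the training data (|s| > 3) the posterior std grows toward the
# prior, flagging states where exploration would be most informative.
```

GPPO itself stacks GP layers (a deep GP) and trains with a PPO-style objective; this sketch only demonstrates why a GP critic provides the calibrated uncertainty the abstract refers to.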

Country of Origin
🇳🇱 Netherlands

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)