Reparameterization Proximal Policy Optimization
By: Hai Zhong, Xun Wang, Zhuoran Li, and more
Potential Business Impact:
Teaches robots to learn faster and more reliably.
Reparameterization policy gradient (RPG) is promising for improving sample efficiency by leveraging differentiable dynamics. However, a critical barrier is its training instability: high-variance gradients can destabilize the learning process. To address this, we draw inspiration from Proximal Policy Optimization (PPO), which uses a surrogate objective to enable stable sample reuse in the model-free setting. The connection between this surrogate objective and RPG, however, has been largely unexplored and is non-trivial to establish. We bridge this gap by demonstrating that the reparameterization gradient of a PPO-like surrogate objective can be computed efficiently using backpropagation through time. Based on this key insight, we propose Reparameterization Proximal Policy Optimization (RPO), a stable and sample-efficient RPG-based method. RPO enables multiple epochs of stable sample reuse by optimizing a clipped surrogate objective tailored for RPG; it is further stabilized by Kullback-Leibler (KL) divergence regularization and remains fully compatible with existing variance reduction methods. We evaluate RPO on a suite of challenging locomotion and manipulation tasks, where it achieves superior sample efficiency and strong performance.
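To make the idea concrete, the sketch below shows, in PyTorch, the general ingredients the abstract describes: reparameterized actions sampled with rsample, a PPO-style clipped probability ratio, a KL penalty against the behavior policy, and a rollout through a differentiable simulator so that backpropagation through time carries the gradient. All names (diff_env_step, policy, old_policy, the horizon and coefficients) and the exact way the pieces are combined are illustrative assumptions for exposition, not the paper's objective or implementation.

# Minimal sketch (PyTorch), assuming a differentiable simulator step.
# Illustrative only; not the authors' RPO objective or code.

import torch

def rpo_style_loss(policy, old_policy, s0, diff_env_step,
                   horizon=16, clip_eps=0.2, kl_coef=0.1):
    """One BPTT rollout of a clipped, KL-regularized surrogate.

    policy / old_policy: modules mapping state -> (mean, log_std) of a
    Gaussian action distribution. diff_env_step(s, a) -> (s_next, reward)
    must be differentiable w.r.t. both the state and the action.
    """
    s = s0
    surrogate, kl_total = 0.0, 0.0
    for _ in range(horizon):
        mean, log_std = policy(s)
        dist = torch.distributions.Normal(mean, log_std.exp())
        # Reparameterized sample: gradients flow through the action.
        a = dist.rsample()

        # Behavior policy is treated as a fixed constant.
        with torch.no_grad():
            old_mean, old_log_std = old_policy(s)
        old_dist = torch.distributions.Normal(old_mean, old_log_std.exp())

        # PPO-style probability ratio between current and behavior policy.
        ratio = (dist.log_prob(a) - old_dist.log_prob(a)).sum(-1).exp()
        clipped = ratio.clamp(1.0 - clip_eps, 1.0 + clip_eps)

        # Differentiable reward from the simulator; the clip keeps reused
        # samples close to old_policy while BPTT carries the RPG gradient.
        s, r = diff_env_step(s, a)
        surrogate = surrogate + torch.min(ratio * r, clipped * r)

        # KL regularization against the behavior policy, as in the abstract.
        kl_total = kl_total + torch.distributions.kl_divergence(
            old_dist, dist).sum(-1)

    # Maximize the clipped surrogate, penalize divergence from old_policy.
    return -(surrogate.mean()) + kl_coef * kl_total.mean()

In an actual training loop, old_policy would be a frozen copy of policy from the previous iteration, and this loss would be minimized for several epochs on the same rollouts, matching the multiple epochs of stable sample reuse described above.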
Similar Papers
Robust and Diverse Multi-Agent Learning via Rational Policy Gradient
Artificial Intelligence
Teaches AI to work together without hurting itself.
Deep Gaussian Process Proximal Policy Optimization
Machine Learning (CS)
Helps robots learn safely and explore better.
Reusing Trajectories in Policy Gradients Enables Fast Convergence
Machine Learning (CS)
Teaches robots to learn faster from old tries.