Staggered Environment Resets Improve Massively Parallel On-Policy Reinforcement Learning
By: Sid Bharthulwar, Stone Tao, Hao Su
Potential Business Impact:
Makes robots learn faster and better.
Massively parallel GPU simulation environments have accelerated reinforcement learning (RL) research by enabling fast data collection for on-policy RL algorithms like Proximal Policy Optimization (PPO). To maximize throughput, it is common to use short rollouts per policy update, increasing the update-to-data (UTD) ratio. However, we find that, in this setting, standard synchronous resets introduce harmful nonstationarity, skewing the learning signal and destabilizing training. We introduce staggered resets, a simple yet effective technique where environments are initialized and reset at varied points within the task horizon. This yields training batches with greater temporal diversity, reducing the nonstationarity induced by synchronized rollouts. We characterize dimensions along which RL environments can benefit significantly from staggered resets through illustrative toy environments. We then apply this technique to challenging high-dimensional robotics environments, achieving significantly higher sample efficiency, faster wall-clock convergence, and stronger final performance. Finally, this technique scales better with more parallel environments compared to naive synchronized rollouts.
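To make the idea concrete, here is a minimal sketch of staggered resets for a batch of parallel environments. It is not the authors' implementation: the toy vectorized environment (`ToyVecEnv`), the `staggered_init` helper, and all parameter names are illustrative assumptions. The point it demonstrates is that offsetting each environment's starting timestep spreads episode phases across the horizon, so a short rollout batch contains early-, mid-, and late-episode transitions instead of only the first few steps after a synchronized reset.

```python
import numpy as np


class ToyVecEnv:
    """A trivial batched environment used only to illustrate reset scheduling."""

    def __init__(self, num_envs: int, horizon: int):
        self.num_envs = num_envs
        self.horizon = horizon
        self.t = np.zeros(num_envs, dtype=np.int64)  # per-env elapsed steps

    def reset(self, env_ids: np.ndarray) -> None:
        # Reset only the selected environments (partial/vectorized reset).
        self.t[env_ids] = 0

    def step(self, actions: np.ndarray):
        self.t += 1
        obs = self.t.astype(np.float32)[:, None]            # dummy observation
        rewards = np.ones(self.num_envs, dtype=np.float32)  # dummy reward
        dones = self.t >= self.horizon                       # time-limit termination
        return obs, rewards, dones


def staggered_init(env: ToyVecEnv, rng: np.random.Generator) -> None:
    """Initialize each env at a different point within the task horizon.

    With synchronous resets, every env starts at t=0 and terminates together,
    so each short rollout only ever covers one narrow slice of the episode.
    Staggering the initial timesteps spreads episode phases across the batch.
    """
    env.t = rng.integers(0, env.horizon, size=env.num_envs)


def collect_rollout(env: ToyVecEnv, num_steps: int, rng: np.random.Generator):
    """Collect a short on-policy rollout, resetting envs only as they finish."""
    phases = []
    for _ in range(num_steps):
        actions = rng.standard_normal((env.num_envs, 1)).astype(np.float32)
        obs, rew, dones = env.step(actions)
        phases.append(env.t.copy())
        finished = np.flatnonzero(dones)
        if finished.size:  # reset only the envs that hit the horizon
            env.reset(finished)
    return np.concatenate(phases)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    env = ToyVecEnv(num_envs=1024, horizon=200)
    staggered_init(env, rng)  # comment out to recover synchronous resets
    phases = collect_rollout(env, num_steps=16, rng=rng)
    # With staggering, the batch spans the full horizon; without it,
    # all transitions come from timesteps 1..16 of every episode.
    print("min/max episode phase in batch:", phases.min(), phases.max())
```

Under these assumptions, swapping `staggered_init` for an all-zeros initialization reproduces the synchronized-rollout baseline, which makes the difference in temporal diversity of each training batch easy to inspect.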
Similar Papers
Periodic Asynchrony: An Effective Method for Accelerating On-Policy Reinforcement Learning
Machine Learning (CS)
Makes computer learning much faster and cheaper.
The Impact of On-Policy Parallelized Data Collection on Deep Reinforcement Learning Networks
Machine Learning (CS)
Makes robots learn faster by collecting more data.
Efficient Adaptation of Reinforcement Learning Agents to Sudden Environmental Change
Machine Learning (CS)
Helps robots learn new tricks without forgetting old ones.