Exploration and Adaptation in Non-Stationary Tasks with Diffusion Policies
By: Gunbir Singh Baveja
Potential Business Impact:
Teaches robots to learn new tasks quickly.
This paper investigates the application of Diffusion Policy in non-stationary, vision-based RL settings, specifically targeting environments where task dynamics and objectives evolve over time. Our work is grounded in practical challenges encountered in dynamic real-world scenarios such as robotic assembly lines and autonomous navigation, where agents must adapt control strategies from high-dimensional visual inputs. We apply Diffusion Policy -- which leverages iterative stochastic denoising to refine latent action representations -- to benchmark environments including Procgen and PointMaze. Our experiments demonstrate that, despite increased computational demands, Diffusion Policy consistently outperforms standard RL methods such as PPO and DQN, achieving higher mean and maximum rewards with reduced variability. These findings underscore the approach's capability to generate coherent, contextually relevant action sequences in continuously shifting conditions, while also highlighting areas for further improvement in handling extreme non-stationarity.
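The core mechanism the abstract refers to -- iteratively denoising a noisy action sequence conditioned on visual observations -- can be sketched roughly as below. This is a minimal illustration under assumed settings, not the authors' implementation: the names (ObsEncoder, NoisePredictor, denoise_actions), network sizes, horizon, and noise schedule are hypothetical placeholders for whatever the paper actually uses.

```python
import torch
import torch.nn as nn

# --- Hypothetical components: names and sizes are illustrative only. ---

class ObsEncoder(nn.Module):
    """Maps an image observation to a conditioning vector."""
    def __init__(self, obs_dim=3 * 64 * 64, cond_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(obs_dim, cond_dim), nn.ReLU())

    def forward(self, obs):
        return self.net(obs)


class NoisePredictor(nn.Module):
    """Predicts the noise added to an action sequence at a given diffusion step."""
    def __init__(self, horizon=8, action_dim=2, cond_dim=128):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        in_dim = horizon * action_dim + cond_dim + 1  # actions + obs cond + timestep
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, horizon * action_dim),
        )

    def forward(self, noisy_actions, cond, t):
        flat = noisy_actions.flatten(1)
        t_feat = t.float().unsqueeze(-1) / 100.0  # crude timestep embedding
        out = self.net(torch.cat([flat, cond, t_feat], dim=-1))
        return out.view(-1, self.horizon, self.action_dim)


@torch.no_grad()
def denoise_actions(encoder, model, obs, num_steps=100, horizon=8, action_dim=2):
    """DDPM-style reverse process: start from Gaussian noise and iteratively
    refine an action sequence conditioned on the current observation."""
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    cond = encoder(obs)
    actions = torch.randn(obs.shape[0], horizon, action_dim)  # start from pure noise

    for t in reversed(range(num_steps)):
        t_batch = torch.full((obs.shape[0],), t)
        eps = model(actions, cond, t_batch)  # predicted noise at step t
        # Posterior mean of the reverse step (standard DDPM update rule).
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (actions - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(actions) if t > 0 else torch.zeros_like(actions)
        actions = mean + torch.sqrt(betas[t]) * noise

    return actions  # action sequence to execute over the next few timesteps


# Example usage with random (untrained) weights and dummy observations.
encoder, model = ObsEncoder(), NoisePredictor()
obs = torch.randn(4, 3, 64, 64)  # batch of 4 image observations
plan = denoise_actions(encoder, model, obs)
print(plan.shape)  # torch.Size([4, 8, 2])
```

In practice the noise predictor is trained to recover the noise added to expert or replayed action sequences, and only a prefix of the denoised horizon is executed before re-planning, which is what allows the policy to keep adapting as the environment shifts.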
Similar Papers
Adaptive Diffusion Policy Optimization for Robotic Manipulation
Robotics
Teaches robots to learn tasks faster and better.
Fine-tuning Diffusion Policies with Backpropagation Through Diffusion Timesteps
Machine Learning (CS)
Makes robots learn faster and better from mistakes.
ADPro: a Test-time Adaptive Diffusion Policy for Robot Manipulation via Manifold and Initial Noise Constraints
Robotics
Robots learn to do tasks faster and better.