State-Covering Trajectory Stitching for Diffusion Planners
By: Kyowoon Lee, Jaesik Choi
Potential Business Impact:
Makes robots learn longer tasks from short examples.
Diffusion-based generative models are emerging as powerful tools for long-horizon planning in reinforcement learning (RL), particularly with offline datasets. However, their performance is fundamentally limited by the quality and diversity of the training data, which often restricts generalization to tasks outside the training distribution or to longer planning horizons. To overcome this challenge, we propose State-Covering Trajectory Stitching (SCoTS), a novel reward-free trajectory augmentation method that incrementally stitches together short trajectory segments, systematically generating diverse and extended trajectories. SCoTS first learns a temporal distance-preserving latent representation that captures the underlying temporal structure of the environment, then iteratively stitches trajectory segments, guided by directional exploration and novelty, to effectively cover and expand this latent space. We demonstrate that SCoTS significantly improves the performance and generalization of diffusion planners on offline goal-conditioned benchmarks that require stitching and long-horizon reasoning. Furthermore, the augmented trajectories generated by SCoTS also boost the performance of widely used offline goal-conditioned RL algorithms across diverse environments.
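The abstract describes a two-step recipe: learn a latent space whose distances reflect temporal reachability, then repeatedly chain short segments whose endpoints are close in that space, scoring candidates by progress along a random exploration direction plus the novelty of the region they reach. Below is a minimal sketch of that stitching loop under stated assumptions; it is not the authors' implementation. The `encode` function stands in for a pretrained temporal-distance-preserving encoder, and names such as `stitch_radius`, `novelty`, and `max_stitches` are illustrative choices, not the paper's API.

```python
import numpy as np

def novelty(z, visited, k=5):
    """Mean distance from z to the k nearest visited latent points;
    large when z lies in a sparsely covered region of the latent space."""
    if len(visited) == 0:
        return np.inf
    d = np.linalg.norm(np.stack(visited) - z, axis=1)
    return np.sort(d)[:k].mean()

def stitch(segments, encode, latent_dim, n_trajs=50, max_stitches=10,
           stitch_radius=0.5, seed=0):
    """Iteratively stitch short segments into extended trajectories.

    segments : list of state sequences, each a [T, state_dim] array
    encode   : maps a single state to a temporal-distance-preserving
               latent vector (assumed pretrained)
    """
    rng = np.random.default_rng(seed)
    visited, trajectories = [], []
    for _ in range(n_trajs):
        # Seed each rollout with a random segment from the offline dataset.
        traj = list(segments[rng.integers(len(segments))])
        # Random unit vector: the directional-exploration target in latent space.
        direction = rng.standard_normal(latent_dim)
        direction /= np.linalg.norm(direction)
        for _ in range(max_stitches):
            z_end = encode(traj[-1])
            best, best_score = None, -np.inf
            for seg in segments:
                z0, z1 = encode(seg[0]), encode(seg[-1])
                # A candidate must start close to the current endpoint in
                # latent space, i.e. be plausibly stitchable...
                if np.linalg.norm(z0 - z_end) > stitch_radius:
                    continue
                # ...and is scored by progress along the exploration
                # direction plus the novelty of the region it reaches.
                score = float((z1 - z_end) @ direction) + novelty(z1, visited)
                if score > best_score:
                    best, best_score = seg, score
            if best is None:
                break  # no stitchable segment; stop extending this trajectory
            traj.extend(best)
            visited.append(encode(best[-1]))
        trajectories.append(np.array(traj))
    return trajectories
```

In this reading, the directional term pushes each stitched trajectory outward along a consistent heading, while the novelty term steers rollouts toward under-visited latent regions; together they produce the state-covering, extended trajectories that the diffusion planner is then trained on.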
Similar Papers
Generative Trajectory Stitching through Diffusion Composition
Robotics
Robots learn to solve new tasks by combining old skills.
ASTRO: Adaptive Stitching via Dynamics-Guided Trajectory Rollouts
Machine Learning (CS)
Makes AI learn better from old data.
SkillMimic-V2: Learning Robust and Generalizable Interaction Skills from Sparse and Noisy Demonstrations
Machine Learning (CS)
Teaches robots new skills from messy examples.