Generative Trajectory Stitching through Diffusion Composition
By: Yunhao Luo, Utkarsh A. Mishra, Yilun Du, and more
Potential Business Impact:
Robots learn to solve new tasks by combining old skills.
Effective trajectory stitching for long-horizon planning is a significant challenge in robotic decision-making. While diffusion models have shown promise in planning, they are limited to solving tasks similar to those seen in their training data. We propose CompDiffuser, a novel generative approach that can solve new tasks by learning to compositionally stitch together shorter trajectory chunks from previously seen tasks. Our key insight is to model the trajectory distribution by subdividing it into overlapping chunks and learning their conditional relationships through a single bidirectional diffusion model. This allows information to propagate between segments during generation, ensuring physically consistent connections. We conduct experiments on benchmark tasks of varying difficulty, covering different environment sizes, agent state dimensions, trajectory types, and training data quality, and show that CompDiffuser significantly outperforms existing methods.
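The chunk-stitching idea in the abstract can be illustrated with a toy sketch. This is not the paper's actual method: the learned bidirectional denoiser is replaced by a hypothetical placeholder that pulls each chunk toward a straight line between start and goal, and all names (`split_into_chunks`, `stitch`) and parameters are illustrative assumptions. What it does show is the compositional structure: overlapping chunks are refined jointly, each chunk's overlap region is blended with its left and right neighbors so information propagates across segment boundaries, and the chunks are finally merged into one consistent trajectory.

```python
import numpy as np

def split_into_chunks(T, chunk_len, overlap):
    """Return (start, end) index spans covering [0, T) with the given overlap."""
    starts, s = [], 0
    while s + chunk_len < T:
        starts.append(s)
        s += chunk_len - overlap
    starts.append(T - chunk_len)  # final chunk ends exactly at T
    return [(s, s + chunk_len) for s in starts]

def stitch(start, goal, T=56, chunk_len=24, overlap=8, n_steps=50, seed=0):
    """Toy compositional sampler: jointly refine overlapping 2D trajectory chunks."""
    rng = np.random.default_rng(seed)
    spans = split_into_chunks(T, chunk_len, overlap)
    # Each chunk begins as pure noise, as in standard diffusion sampling.
    chunks = [rng.normal(size=(chunk_len, 2)) for _ in spans]
    for step in range(n_steps):
        alpha = (step + 1) / n_steps  # crude stand-in for a noise schedule
        for i, (s, e) in enumerate(spans):
            # Placeholder "denoiser": pull samples toward the straight line
            # from start to goal (a trained model would predict this instead).
            t = np.linspace(s, e - 1, chunk_len)[:, None] / (T - 1)
            target = (1 - t) * start + t * goal
            chunks[i] = (1 - alpha) * chunks[i] + alpha * target
            # Bidirectional conditioning: blend overlap regions with both
            # neighbors so information flows across chunk boundaries.
            if i > 0:
                chunks[i][:overlap] = 0.5 * (chunks[i][:overlap]
                                             + chunks[i - 1][-overlap:])
            if i < len(spans) - 1:
                chunks[i][-overlap:] = 0.5 * (chunks[i][-overlap:]
                                              + chunks[i + 1][:overlap])
    # Merge chunks into one trajectory, averaging the overlap regions.
    traj = np.zeros((T, 2))
    counts = np.zeros((T, 1))
    for (s, e), c in zip(spans, chunks):
        traj[s:e] += c
        counts[s:e] += 1
    return traj / counts
```

With the defaults (`T=56`, `chunk_len=24`, `overlap=8`), three chunks cover the horizon with exact 8-step overlaps, and the merged trajectory starts at `start` and ends at `goal`. In the real method the conditional denoising model is learned from short training trajectories, which is what lets the composition generalize to longer, unseen tasks.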
Similar Papers
State-Covering Trajectory Stitching for Diffusion Planners
Machine Learning (CS)
Makes robots learn longer tasks from short examples.
TransDiffuser: Diverse Trajectory Generation with Decorrelated Multi-modal Representation for End-to-end Autonomous Driving
Robotics
Helps self-driving cars plan safer, varied routes.
3D-CovDiffusion: 3D-Aware Diffusion Policy for Coverage Path Planning
Robotics
Robots learn to paint and polish perfectly.