Flexible Multitask Learning with Factorized Diffusion Policy
By: Chaoqi Liu, Haonan Chen, Sigmund H. Høeg, and more
Multitask learning poses significant challenges due to the highly multimodal and diverse nature of robot action distributions. Fitting a single policy to these complex distributions is difficult: existing monolithic models often underfit the action distribution and lack the flexibility required for efficient adaptation. We introduce a novel modular diffusion policy framework that factorizes complex action distributions into a composition of specialized diffusion models, each capturing a distinct sub-mode of the behavior space, yielding a more effective overall policy. This modular structure also enables flexible adaptation to new tasks by adding or fine-tuning components, which inherently mitigates catastrophic forgetting. Empirically, across both simulation and real-world robotic manipulation settings, our method consistently outperforms strong modular and monolithic baselines.
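As a rough illustration (not the authors' implementation), the sketch below shows one way a factorized diffusion policy could be composed: several specialized denoisers whose noise predictions are mixed by a learned gate conditioned on the observation. All names here (Denoiser, FactorizedDiffusionPolicy, num_components) are illustrative assumptions, not identifiers from the paper.

```python
# Hypothetical sketch of a factorized diffusion policy as a gated
# composition of specialized denoisers. Assumes a standard DDPM-style
# noise-prediction setup; this is not the authors' code.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """One component model; predicts noise for a sub-mode of the action space."""
    def __init__(self, act_dim, obs_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(act_dim + obs_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, noisy_action, obs, t):
        # t is the diffusion timestep, appended as one extra feature.
        x = torch.cat([noisy_action, obs, t.unsqueeze(-1)], dim=-1)
        return self.net(x)

class FactorizedDiffusionPolicy(nn.Module):
    """Composes K specialized denoisers; a gate mixes their noise predictions."""
    def __init__(self, act_dim, obs_dim, num_components=4):
        super().__init__()
        self.components = nn.ModuleList(
            Denoiser(act_dim, obs_dim) for _ in range(num_components)
        )
        self.gate = nn.Linear(obs_dim, num_components)

    def forward(self, noisy_action, obs, t):
        # (B, K, act_dim): each component's noise prediction.
        eps = torch.stack([c(noisy_action, obs, t) for c in self.components], dim=1)
        w = torch.softmax(self.gate(obs), dim=-1)      # (B, K) mixing weights
        return (w.unsqueeze(-1) * eps).sum(dim=1)      # weighted composition

# Usage (shapes only):
# policy = FactorizedDiffusionPolicy(act_dim=7, obs_dim=32)
# eps_hat = policy(noisy_action, obs, t)  # plug into any DDPM/DDIM sampler
```

Under this reading, adapting to a new task could mean appending a fresh component and fine-tuning only it (plus the gate) while freezing the existing components, which is one plausible mechanism for the forgetting-mitigation claim in the abstract.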