Compose by Focus: Scene Graph-based Atomic Skills
By: Han Qi, Changhe Chen, Heng Yang
Potential Business Impact:
Robots learn to combine simple actions for new tasks.
A key requirement for generalist robots is compositional generalization: the ability to combine atomic skills to solve complex, long-horizon tasks. While prior work has primarily focused on synthesizing a planner that sequences pre-learned skills, robust execution of the individual skills themselves remains challenging, as visuomotor policies often fail under distribution shifts induced by scene composition. To address this, we introduce a scene graph-based representation that focuses on task-relevant objects and relations, thereby mitigating sensitivity to irrelevant variation. Building on this idea, we develop a scene-graph skill learning framework that integrates graph neural networks with diffusion-based imitation learning, and further combine "focused" scene-graph skills with a vision-language model (VLM)-based task planner. Experiments on manipulation tasks in both simulation and the real world demonstrate substantially higher success rates than state-of-the-art baselines, highlighting improved robustness and compositional generalization in long-horizon tasks.
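To make the described pipeline concrete, below is a minimal sketch (not the authors' released code) of how a message-passing scene-graph encoder could produce a conditioning vector for a diffusion-style denoising head. All module names, feature dimensions, and the sum-aggregation scheme are illustrative assumptions; the paper's actual GNN and diffusion architecture may differ.

```python
# Illustrative sketch only: a scene-graph encoder conditioning a DDPM-style
# noise predictor. Dimensions, layers, and aggregation are assumptions.
import torch
import torch.nn as nn


class SceneGraphEncoder(nn.Module):
    """Encodes task-relevant objects (nodes) and relations (edges)
    into one conditioning vector via simple message passing."""

    def __init__(self, node_dim: int, edge_dim: int, hidden: int = 128):
        super().__init__()
        self.node_proj = nn.Linear(node_dim, hidden)
        self.msg = nn.Sequential(
            nn.Linear(2 * hidden + edge_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        self.update = nn.GRUCell(hidden, hidden)

    def forward(self, nodes, edges, edge_index):
        # nodes: (N, node_dim), edges: (E, edge_dim),
        # edge_index: (2, E) listing (source, destination) node ids.
        h = torch.relu(self.node_proj(nodes))
        src, dst = edge_index
        m = self.msg(torch.cat([h[src], h[dst], edges], dim=-1))
        # Sum incoming messages per destination node, then update states.
        agg = torch.zeros_like(h).index_add_(0, dst, m)
        h = self.update(agg, h)
        return h.mean(dim=0)  # graph-level embedding


class ConditionalDenoiser(nn.Module):
    """Predicts the noise added to an action, conditioned on the
    scene-graph embedding and the diffusion timestep."""

    def __init__(self, action_dim: int, cond_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.ReLU())
        self.net = nn.Sequential(
            nn.Linear(action_dim + cond_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, noisy_action, t, cond):
        te = self.time_embed(t.view(-1, 1).float())
        c = cond.unsqueeze(0).expand(noisy_action.shape[0], -1)
        return self.net(torch.cat([noisy_action, c, te], dim=-1))


# Toy usage: 3 objects, 2 relations, a 7-DoF action, batch of 4 samples.
enc = SceneGraphEncoder(node_dim=16, edge_dim=8)
den = ConditionalDenoiser(action_dim=7)
nodes = torch.randn(3, 16)
edges = torch.randn(2, 8)
edge_index = torch.tensor([[0, 1], [2, 2]])  # edges 0->2 and 1->2
cond = enc(nodes, edges, edge_index)
eps_hat = den(torch.randn(4, 7), torch.randint(0, 100, (4,)), cond)
```

The sketch reflects the abstract's key idea: because the graph contains only task-relevant objects and relations, distractor objects never enter the conditioning vector, and the pooled embedding is invariant to how the rest of the scene is composed.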
Similar Papers
Iterative Compositional Data Generation for Robot Control
Robotics
Robots learn new tasks by combining old skills.
Task-Agnostic Experts Composition for Continual Learning
Machine Learning (CS)
AI learns to solve hard problems by breaking them down.
SymSkill: Symbol and Skill Co-Invention for Data-Efficient and Real-Time Long-Horizon Manipulation
Robotics
Robots learn to do many tasks by watching.