HiMaCon: Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data
By: Ruizhe Liu, Pei Zhou, Qian Luo, and more
Potential Business Impact:
Robots learn reusable manipulation skills from their own unlabeled sensor data, with no human labeling, making it easier to handle new tasks.
Effective generalization in robotic manipulation requires representations that capture invariant patterns of interaction across environments and tasks. We present a self-supervised framework for learning hierarchical manipulation concepts that encode these invariant patterns through cross-modal sensory correlations and multi-level temporal abstractions, without requiring human annotation. Our approach combines a cross-modal correlation network that identifies persistent patterns across sensory modalities with a multi-horizon predictor that organizes representations hierarchically across temporal scales. Manipulation concepts learned through this dual structure enable policies to focus on transferable relational patterns while maintaining awareness of both immediate actions and longer-term goals. Empirical evaluation across simulated benchmarks and real-world deployments demonstrates significant performance improvements with our concept-enhanced policies. Analysis reveals that the learned concepts resemble human-interpretable manipulation primitives despite receiving no semantic supervision. This work advances the understanding of representation learning for manipulation and provides a practical approach to enhancing robotic performance in complex scenarios.
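The abstract describes a dual structure: a cross-modal correlation network that fuses sensory modalities, plus a multi-horizon predictor that builds temporal hierarchy. Below is a minimal sketch of how such a pair of modules might look in PyTorch. The module names, feature dimensions, choice of modalities (vision and proprioception), and prediction horizons are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the dual structure described in the abstract (assumed
# PyTorch setup). All names, sizes, and horizons are illustrative.
import torch
import torch.nn as nn

class CrossModalCorrelation(nn.Module):
    """Projects per-modality features into a shared space and fuses them
    with cross-attention, capturing patterns that persist across modalities."""
    def __init__(self, dim=256):
        super().__init__()
        self.vision_proj = nn.Linear(512, dim)   # assumed visual feature size
        self.proprio_proj = nn.Linear(14, dim)   # assumed proprioceptive size
        self.fuse = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, vision_feat, proprio_feat):
        v = self.vision_proj(vision_feat)        # (B, T, dim)
        p = self.proprio_proj(proprio_feat)      # (B, T, dim)
        fused, _ = self.fuse(v, p, p)            # vision queries proprioception
        return fused                             # shared "concept" features

class MultiHorizonPredictor(nn.Module):
    """Predicts future latent states at several temporal offsets: short
    horizons track immediate actions, long horizons track goal-level structure."""
    def __init__(self, dim=256, horizons=(1, 8, 32)):
        super().__init__()
        self.horizons = horizons
        self.heads = nn.ModuleList([nn.Linear(dim, dim) for _ in horizons])

    def forward(self, concept_feat):
        # One prediction per horizon; training would compare each against the
        # encoder's features at t + h (e.g. with a contrastive or MSE loss).
        return {h: head(concept_feat) for h, head in zip(self.horizons, self.heads)}
```

One plausible reading of "concept-enhanced policies" is that a downstream policy conditions on the fused concept features alongside raw observations, though the paper's exact conditioning scheme is not specified in this summary.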
Similar Papers
RoboHiMan: A Hierarchical Evaluation Paradigm for Compositional Generalization in Long-Horizon Manipulation
Robotics
Helps robots learn and do new jobs.
Towards a Unified Understanding of Robot Manipulation: A Comprehensive Survey
Robotics
Helps robots learn to pick up and move things.
A Multimodal-Multitask Framework with Cross-modal Relation and Hierarchical Interactive Attention for Semantic Comprehension
CV and Pattern Recognition
Makes computers understand mixed information better.