RoboGround: Robotic Manipulation with Grounded Vision-Language Priors
By: Haifeng Huang, Xinyi Chen, Yilun Chen, and more
Potential Business Impact:
Robots learn to move objects better with visual guides.
Recent advancements in robotic manipulation have highlighted the potential of intermediate representations for improving policy generalization. In this work, we explore grounding masks as an effective intermediate representation, balancing two key advantages: (1) effective spatial guidance that specifies target objects and placement areas while also conveying information about object shape and size, and (2) broad generalization potential driven by large-scale vision-language models pretrained on diverse grounding datasets. We introduce RoboGround, a grounding-aware robotic manipulation system that leverages grounding masks as an intermediate representation to guide policy networks in object manipulation tasks. To further explore and enhance generalization, we propose an automated pipeline for generating large-scale simulated data with a diverse set of objects and instructions. Extensive experiments show the value of our dataset and the effectiveness of grounding masks as intermediate guidance, significantly enhancing the generalization abilities of robot policies.
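To make the core idea concrete, below is a minimal sketch (not the authors' released code) of how a policy network could consume grounding masks as extra input channels alongside the RGB observation. The module names, tensor shapes, and two-mask convention (target-object mask plus placement-area mask) are illustrative assumptions; in RoboGround the masks would come from a pretrained vision-language grounding model rather than being supplied by hand.

```python
# Minimal sketch, assuming a two-mask convention (target object + placement area).
# This is NOT the RoboGround implementation; it only illustrates how grounding
# masks can act as an intermediate representation that conditions a policy.
import torch
import torch.nn as nn


class MaskConditionedPolicy(nn.Module):
    """Toy policy: encodes RGB + binary grounding masks and predicts an action."""

    def __init__(self, action_dim: int = 7):
        super().__init__()
        # 3 RGB channels + 2 mask channels (object mask, placement mask)
        self.encoder = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, action_dim)
        )

    def forward(self, rgb: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W); masks: (B, 2, H, W), e.g. produced by a grounding VLM
        x = torch.cat([rgb, masks], dim=1)
        return self.head(self.encoder(x))


if __name__ == "__main__":
    policy = MaskConditionedPolicy()
    rgb = torch.rand(1, 3, 128, 128)
    masks = torch.zeros(1, 2, 128, 128)  # placeholder object + placement masks
    print(policy(rgb, masks).shape)      # torch.Size([1, 7])
```

The design choice the sketch highlights is that the masks carry spatial extent (shape and size of the target object and placement region), so concatenating them as image-aligned channels lets the policy exploit that geometry directly instead of relying on language alone.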
Similar Papers
Bridging Perception and Action: Spatially-Grounded Mid-Level Representations for Robot Generalization
Robotics
Teaches robots to do tricky jobs better.
AntiGrounding: Lifting Robotic Actions into VLM Representation Space for Decision Making
Robotics
Robots learn new tasks without practice.
Hierarchical Language Models for Semantic Navigation and Manipulation in an Aerial-Ground Robotic System
Robotics
Robots work together better using AI to move things.