INTENTION: Inferring Tendencies of Humanoid Robot Motion Through Interactive Intuition and Grounded VLM
By: Jin Wang, Weijie Wang, Boyuan Deng, and more
Potential Business Impact:
Robots learn to do new tasks by watching and remembering.
Traditional control and planning for robotic manipulation rely heavily on precise physical models and predefined action sequences. While effective in structured environments, such approaches often fail in real-world scenarios due to modeling inaccuracies and struggle to generalize to novel tasks. In contrast, humans intuitively interact with their surroundings, demonstrating remarkable adaptability and making efficient decisions through implicit physical understanding. In this work, we propose INTENTION, a novel framework that equips robots with learned interactive intuition and enables autonomous manipulation in diverse scenarios by integrating Vision-Language Model (VLM)-based scene reasoning with interaction-driven memory. We introduce a Memory Graph that records scenes from previous task interactions, embodying human-like understanding of and decision-making about different tasks in the real world. Meanwhile, we design an Intuitive Perceptor that extracts physical relations and affordances from visual scenes. Together, these components empower robots to infer appropriate interaction behaviors in new scenes without relying on repetitive instructions. Videos: https://robo-intention.github.io
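The abstract does not give implementation details, so the sketch below is only an illustration of how the two named components, a Memory Graph of past interactions and an Intuitive Perceptor that grounds a scene into relations and affordances, might fit together. All class names, method signatures, and the overlap-based retrieval are assumptions for illustration; the grounded VLM call is stubbed rather than invoking any real model API.

```python
# Hypothetical sketch (not the authors' code): a Memory Graph that stores
# scenes from past task interactions, and an Intuitive Perceptor stub that
# turns a visual scene into physical relations and affordances.

from dataclasses import dataclass


@dataclass
class SceneNode:
    """One remembered interaction: what was perceived and what the robot did."""
    task: str              # e.g. "open the drawer"
    relations: list[str]   # e.g. ["handle attached_to drawer"]
    affordances: list[str] # e.g. ["handle: graspable", "drawer: pullable"]
    action: str            # the behavior that succeeded in that scene


class MemoryGraph:
    """Stores past interaction scenes and recalls the most similar one."""

    def __init__(self) -> None:
        self.nodes: list[SceneNode] = []

    def add(self, node: SceneNode) -> None:
        self.nodes.append(node)

    def retrieve(self, relations: list[str], affordances: list[str]) -> SceneNode | None:
        # Naive similarity: count overlapping relations/affordances.
        # The actual framework presumably uses richer scene representations.
        def overlap(node: SceneNode) -> int:
            return (len(set(node.relations) & set(relations))
                    + len(set(node.affordances) & set(affordances)))
        return max(self.nodes, key=overlap, default=None)


class IntuitivePerceptor:
    """Placeholder for grounding a visual scene; a real version would query
    a grounded VLM on the input image instead of returning fixed strings."""

    def perceive(self, image_path: str) -> tuple[list[str], list[str]]:
        return (["handle attached_to cabinet_door"],
                ["handle: graspable", "cabinet_door: pullable"])


if __name__ == "__main__":
    memory = MemoryGraph()
    memory.add(SceneNode(
        task="open the drawer",
        relations=["handle attached_to drawer"],
        affordances=["handle: graspable", "drawer: pullable"],
        action="grasp handle, pull along drawer axis",
    ))

    perceptor = IntuitivePerceptor()
    relations, affordances = perceptor.perceive("kitchen_scene.png")
    recalled = memory.retrieve(relations, affordances)
    if recalled is not None:
        print(f"Recalled behavior: {recalled.action}")
```

The point of the sketch is the division of labor the abstract describes: perception grounds the new scene into relations and affordances, and memory retrieval over past interactions suggests a behavior without the robot needing repeated instructions.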
Similar Papers
IntentionVLA: Generalizable and Efficient Embodied Intention Reasoning for Human-Robot Interaction
Robotics
Robots understand what you want without you saying it.
Mind to Hand: Purposeful Robotic Control via Embodied Reasoning
Robotics
Robots learn to do tasks by watching and thinking.
Utilizing Vision-Language Models as Action Models for Intent Recognition and Assistance
Robotics
Robots understand what you want and help you.