Vision-Language-Policy Model for Dynamic Robot Task Planning
By: Jin Wang, Kim Tien Ly, Jacques Cloete, and more
Bridging the gap between natural-language commands and autonomous execution in unstructured environments remains an open challenge in robotics. It requires robots to perceive and reason about the current task scene across multiple modalities, and to plan behaviors that achieve the intended goal. Traditional robotic task-planning approaches often struggle to connect high-level task reasoning with low-level execution, and they cannot dynamically update the task strategy when instructions change during execution, which limits their versatility and adaptability to new tasks. In this work, we propose a novel language-model-based framework for dynamic robot task planning. Our Vision-Language-Policy (VLP) model, built on a vision-language model fine-tuned on real-world data, interprets semantic instructions, reasons over the current task scene, and generates behavior policies that drive the robot to accomplish the task. It can also dynamically adjust the task strategy in response to changes in the task, enabling flexible adaptation to evolving requirements. Experiments with different robots on a variety of real-world tasks show that the trained model adapts efficiently to novel scenarios and dynamically updates its policy, demonstrating strong planning autonomy and cross-embodiment generalization. Videos: https://robovlp.github.io/
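To make the closed-loop idea concrete, here is a minimal sketch of how such a planner could re-plan when the instruction changes mid-execution. This is not the authors' released code: the model call (`query_vlp`), the `PolicyStep` skill vocabulary, and the `robot.execute` interface are hypothetical placeholders standing in for the fine-tuned VLP model and the robot controller.

```python
"""Minimal sketch of a dynamic, closed-loop VLP planning loop.

All interfaces here are assumptions for illustration, not the paper's API.
"""

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PolicyStep:
    skill: str   # e.g. "pick", "place" -- assumed skill vocabulary
    target: str  # object or location the skill acts on


def query_vlp(image, instruction: str) -> List[PolicyStep]:
    """Stand-in for the fine-tuned vision-language-policy model.

    A real system would encode the scene image and the natural-language
    instruction and decode an ordered behavior policy; here we return a
    fixed toy plan so the loop below can run end to end.
    """
    return [PolicyStep("pick", "red cube"), PolicyStep("place", "tray")]


def run_task(robot, get_image: Callable, get_instruction: Callable[[], str]) -> None:
    """Execute a task closed-loop, re-planning if the instruction changes."""
    instruction = get_instruction()
    plan = query_vlp(get_image(), instruction)

    while plan:
        latest = get_instruction()
        if latest != instruction:
            # Instruction changed mid-task: re-plan from the current scene.
            instruction = latest
            plan = query_vlp(get_image(), instruction)
            continue

        step = plan.pop(0)
        robot.execute(step.skill, step.target)  # hypothetical robot API
```

The design choice illustrated is that the instruction is re-checked before every step, so a changed command triggers a fresh query of the model on the current scene rather than continuing a stale plan.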