Guardian: Detecting Robotic Planning and Execution Errors with Vision-Language Models
By: Paul Pacaud, Ricardo Garcia, Shizhe Chen, and more
Potential Business Impact:
Enables robots to detect and recover from their own mistakes.
Robust robotic manipulation requires reliable failure detection and recovery. Although current Vision-Language Models (VLMs) show promise, their accuracy and generalization are limited by the scarcity of failure data. To address this data gap, we propose an automatic robot failure synthesis approach that procedurally perturbs successful trajectories to generate diverse planning and execution failures. This method produces not only binary classification labels but also fine-grained failure categories and step-by-step reasoning traces in both simulation and the real world. With it, we construct three new failure detection benchmarks: RLBench-Fail, BridgeDataV2-Fail, and UR5-Fail, substantially expanding the diversity and scale of existing failure datasets. We then train Guardian, a VLM with multi-view images for detailed failure reasoning and detection. Guardian achieves state-of-the-art performance on both existing and newly introduced benchmarks. It also effectively improves task success rates when integrated into a state-of-the-art manipulation system in simulation and real robots, demonstrating the impact of our generated failure data.
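The core idea of perturbing successful trajectories to synthesize labeled failures can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the waypoint format, failure categories, and function names are assumptions, and the real system operates on far richer state (multi-view images, gripper actions, reasoning traces).

```python
import random

# Hypothetical sketch: procedurally perturb a successful trajectory
# (a list of (x, y, z) waypoints) to create a labeled failure example.
# Category names and perturbation magnitudes are illustrative only.

def synthesize_failure(trajectory, mode="execution", rng=None):
    """Return a perturbed trajectory and a label with a fine-grained
    failure category plus a short reasoning trace.

    mode="execution": displace one waypoint (e.g., a missed grasp).
    mode="planning":  drop one waypoint (e.g., a skipped plan step).
    """
    rng = rng or random.Random(0)
    traj = [tuple(p) for p in trajectory]
    idx = rng.randrange(len(traj))
    if mode == "execution":
        x, y, z = traj[idx]
        offset = 0.05 + rng.random() * 0.05  # 5-10 cm positional error
        traj[idx] = (x + offset, y, z)
        category = "execution/grasp_offset"
        reason = f"Waypoint {idx} displaced by {offset:.3f} m along x."
    elif mode == "planning":
        traj.pop(idx)
        category = "planning/skipped_step"
        reason = f"Waypoint {idx} removed from the plan."
    else:
        raise ValueError(f"unknown mode: {mode}")
    return traj, {"success": False, "category": category, "reasoning": reason}

success_traj = [(0.0, 0.0, 0.2), (0.1, 0.0, 0.1), (0.1, 0.0, 0.0)]
failed_traj, label = synthesize_failure(success_traj, mode="planning")
```

Each synthesized example carries not just a binary failure flag but a category and a reasoning string, mirroring the paper's claim that the pipeline produces fine-grained labels and step-by-step reasoning traces for training the detector.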
Similar Papers
FailSafe: Reasoning and Recovery from Failures in Vision-Language-Action Models
Robotics
Robots learn to fix their own mistakes.
A Unified Framework for Real-Time Failure Handling in Robotics Using Vision-Language Models, Reactive Planner and Behavior Trees
Robotics
Robots fix mistakes while working, not just before.