FPC-VLA: A Vision-Language-Action Framework with a Supervisor for Failure Prediction and Correction

Published: September 4, 2025 | arXiv ID: 2509.04018v1

By: Yifan Yang, Zhixiang Duan, Tianshi Xie, and more

Potential Business Impact:

Robots learn to fix their own mistakes.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Robotic manipulation is a fundamental component of automation. However, traditional perception-planning pipelines often fall short in open-ended tasks due to limited flexibility, while single end-to-end Vision-Language-Action (VLA) models offer promising capabilities but lack crucial mechanisms for anticipating and recovering from failure. To address these challenges, we propose FPC-VLA, a dual-model framework that integrates a VLA with a supervisor for failure prediction and correction. The supervisor evaluates action viability through vision-language queries and generates corrective strategies when risks arise, and is trained efficiently without manual labeling. A similarity-guided fusion module further refines actions by leveraging past predictions. Evaluation results on multiple simulation platforms (SIMPLER and LIBERO) and robot embodiments (WidowX, Google Robot, Franka) show that FPC-VLA outperforms state-of-the-art models in both zero-shot and fine-tuned settings. By activating the supervisor only at keyframes, our approach significantly increases task success rates with minimal impact on execution time. Successful real-world deployments on diverse, long-horizon tasks confirm FPC-VLA's strong generalization and practical utility for building more reliable autonomous systems.

Page Count
9 pages

Category
Computer Science:
Robotics