F1: A Vision-Language-Action Model Bridging Understanding and Generation to Actions
By: Qi Lv, Weijie Kong, Hao Li, and more
Potential Business Impact:
Helps robots plan ahead to do tasks better.
Executing language-conditioned tasks in dynamic visual environments remains a central challenge in embodied AI. Existing Vision-Language-Action (VLA) models predominantly adopt reactive state-to-action mappings, often leading to short-sighted behaviors and poor robustness in dynamic scenes. In this paper, we introduce F1, a pretrained VLA framework that integrates visual foresight generation into the decision-making pipeline. F1 adopts a Mixture-of-Transformer architecture with dedicated modules for perception, foresight generation, and control, thereby bridging understanding, generation, and actions. At its core, F1 employs a next-scale prediction mechanism to synthesize goal-conditioned visual foresight as explicit planning targets. By forecasting plausible future visual states, F1 reformulates action generation as a foresight-guided inverse dynamics problem, enabling actions that implicitly achieve visual goals. To endow F1 with robust and generalizable capabilities, we propose a three-stage training recipe on an extensive dataset comprising over 330k trajectories across 136 diverse tasks. This training scheme enhances modular reasoning and equips the model with transferable visual foresight, which is critical for complex and dynamic environments. Extensive evaluations on real-world tasks and simulation benchmarks demonstrate that F1 consistently outperforms existing approaches, achieving substantial gains in both task success rate and generalization ability.
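The abstract frames action generation as a foresight-guided inverse dynamics problem: the model first forecasts a plausible future visual state and then infers the action that moves the current state toward that forecast. The sketch below illustrates this two-stage idea in a deliberately simplified form; the module names, the MLP stand-ins for F1's Mixture-of-Transformer blocks, the latent dimension, the single-step rollout, and the 7-dimensional action size are all assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of foresight-guided inverse dynamics:
# forecast a goal latent from the current observation and instruction, then
# infer the action that bridges current and forecast states.
import torch
import torch.nn as nn


class ForesightGenerator(nn.Module):
    """Predicts a latent of the future (goal) observation from the current
    observation latent and a language-instruction embedding."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * 2, dim * 2), nn.GELU(), nn.Linear(dim * 2, dim)
        )

    def forward(self, obs_latent: torch.Tensor, text_latent: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs_latent, text_latent], dim=-1))


class InverseDynamicsHead(nn.Module):
    """Maps (current latent, predicted goal latent) to an action, so the
    action implicitly steers the scene toward the forecast visual goal."""
    def __init__(self, dim: int = 256, action_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * 2, dim), nn.GELU(), nn.Linear(dim, action_dim)
        )

    def forward(self, obs_latent: torch.Tensor, goal_latent: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs_latent, goal_latent], dim=-1))


if __name__ == "__main__":
    dim, batch = 256, 4
    foresight = ForesightGenerator(dim)
    policy = InverseDynamicsHead(dim)

    obs_latent = torch.randn(batch, dim)   # encoded current camera frame (placeholder)
    text_latent = torch.randn(batch, dim)  # encoded language instruction (placeholder)

    goal_latent = foresight(obs_latent, text_latent)  # "imagine" the future state
    action = policy(obs_latent, goal_latent)          # act toward that future
    print(action.shape)  # torch.Size([4, 7])
```

In F1 itself, the foresight module performs next-scale prediction over visual tokens rather than a single latent regression; the sketch only conveys how conditioning the policy on a predicted future state turns control into an inverse dynamics problem.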
Similar Papers
Vision-Language-Action Models: Concepts, Progress, Applications and Challenges
CV and Pattern Recognition
Robots understand what they see and hear to act.
FPC-VLA: A Vision-Language-Action Framework with a Supervisor for Failure Prediction and Correction
Robotics
Robots learn to fix their own mistakes.