Long-VLA: Unleashing Long-Horizon Capability of Vision Language Action Model for Robot Manipulation
By: Yiguo Fan, Pengxiang Ding, Shuanghao Bai and more
Potential Business Impact:
Robots learn to do many steps in a row.
Vision-Language-Action (VLA) models have become a cornerstone in robotic policy learning, leveraging large-scale multimodal data for robust and scalable control. However, existing VLA frameworks primarily address short-horizon tasks, and their effectiveness on long-horizon, multi-step robotic manipulation remains limited due to challenges in skill chaining and subtask dependencies. In this work, we introduce Long-VLA, the first end-to-end VLA model specifically designed for long-horizon robotic tasks. Our approach features a novel phase-aware input masking strategy that adaptively segments each subtask into moving and interaction phases, enabling the model to focus on phase-relevant sensory cues and enhancing subtask compatibility. This unified strategy preserves the scalability and data efficiency of VLA training, and our architecture-agnostic module can be seamlessly integrated into existing VLA models. We further propose the L-CALVIN benchmark to systematically evaluate long-horizon manipulation. Extensive experiments on both simulated and real-world tasks demonstrate that Long-VLA significantly outperforms prior state-of-the-art methods, establishing a new baseline for long-horizon robotic control.
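The abstract describes a phase-aware input masking strategy that segments each subtask into a moving phase and an interaction phase and keeps only phase-relevant sensory inputs. The Python sketch below illustrates that general idea only; the modality names (static_cam, wrist_cam, proprio) and the phase-to-modality mapping are assumptions for illustration, not Long-VLA's actual implementation.

```python
import numpy as np

# Hypothetical modality names; the abstract does not specify Long-VLA's inputs.
MODALITIES = ["static_cam", "wrist_cam", "proprio"]

def phase_aware_mask(phase: str) -> dict:
    """Return a binary keep-mask over input modalities for the given phase.

    Assumed mapping: the moving phase relies on the global (static) view,
    the interaction phase on the close-up wrist view; proprioception is
    kept in both phases.
    """
    if phase == "moving":
        keep = {"static_cam", "proprio"}
    elif phase == "interaction":
        keep = {"wrist_cam", "proprio"}
    else:
        raise ValueError(f"unknown phase: {phase}")
    return {m: (m in keep) for m in MODALITIES}

def apply_mask(inputs: dict, mask: dict) -> dict:
    """Zero out masked modalities before they reach the policy backbone."""
    return {m: (x if mask[m] else np.zeros_like(x)) for m, x in inputs.items()}

# Example with dummy observations for each modality.
obs = {
    "static_cam": np.random.rand(224, 224, 3),
    "wrist_cam": np.random.rand(224, 224, 3),
    "proprio": np.random.rand(7),
}
masked_obs = apply_mask(obs, phase_aware_mask("moving"))
```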
Similar Papers
LoHoVLA: A Unified Vision-Language-Action Model for Long-Horizon Embodied Tasks
Robotics
Robots learn to do many steps to finish tasks.
EvoVLA: Self-Evolving Vision-Language-Action Model
CV and Pattern Recognition
Robots learn to do long, tricky jobs better.
EchoVLA: Robotic Vision-Language-Action Model with Synergistic Declarative Memory for Mobile Manipulation
Robotics
Helps robots remember and do tasks across rooms.