Look, Zoom, Understand: The Robotic Eyeball for Embodied Perception
By: Jiashu Yang, Yifan Han, Yucheng Xie, and more
Potential Business Impact:
Robotic eye learns to look and zoom for details.
In embodied AI perception systems, visual perception should be active: the goal is not to passively process static images, but to actively acquire more informative data within pixel and spatial budget constraints. Existing vision models and fixed RGB-D camera systems fundamentally fail to reconcile wide-area coverage with fine-grained detail acquisition, severely limiting their efficacy in open-world robotic applications. To address this issue, we propose EyeVLA, a robotic eyeball for active visual perception that takes proactive actions based on instructions, enabling clear observation of fine-grained target objects and detailed information across a wide spatial extent. EyeVLA discretizes action behaviors into action tokens and integrates them with vision-language models (VLMs) that possess strong open-world understanding capabilities, enabling joint modeling of vision, language, and actions within a single autoregressive sequence. By using 2D bounding box coordinates to guide the reasoning chain and applying reinforcement learning to refine the viewpoint selection policy, we transfer the open-world scene understanding capability of the VLM to a vision-language-action (VLA) policy using only minimal real-world data. Experiments show that our system efficiently perceives instructed scenes in real-world environments and actively acquires more accurate visual information through instruction-driven rotation and zoom actions, thereby achieving strong environmental perception capabilities. EyeVLA introduces a novel robotic vision system that leverages detailed, spatially rich, large-scale embodied data and actively acquires highly informative visual observations for downstream embodied tasks.
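The abstract describes discretizing continuous camera actions (rotation and zoom) into action tokens that a VLM can emit autoregressively alongside language tokens. The sketch below illustrates one plausible way such a tokenization could work; it is not the authors' code, and the bin count, action ranges, and token names are assumptions made for illustration.

```python
# Minimal sketch (not the authors' implementation): quantizing continuous
# pan/tilt/zoom commands into discrete action tokens that could extend a
# VLM vocabulary. Bin counts, ranges, and token formats are assumptions.
import numpy as np

N_BINS = 256  # assumed number of bins per action dimension

# Assumed physical limits of the robotic eyeball (degrees / zoom factor).
ACTION_RANGES = {
    "pan":  (-90.0, 90.0),
    "tilt": (-45.0, 45.0),
    "zoom": (1.0, 8.0),
}

def action_to_tokens(pan: float, tilt: float, zoom: float) -> list[str]:
    """Map a continuous (pan, tilt, zoom) command to discrete action tokens."""
    tokens = []
    for name, value in (("pan", pan), ("tilt", tilt), ("zoom", zoom)):
        lo, hi = ACTION_RANGES[name]
        # Clip, normalize to [0, 1], then quantize into N_BINS bins.
        frac = (np.clip(value, lo, hi) - lo) / (hi - lo)
        bin_idx = min(int(frac * N_BINS), N_BINS - 1)
        tokens.append(f"<{name}_{bin_idx}>")
    return tokens

def tokens_to_action(tokens: list[str]) -> dict[str, float]:
    """Decode action tokens back to approximate continuous values (bin centers)."""
    values = {}
    for tok in tokens:
        name, bin_idx = tok.strip("<>").rsplit("_", 1)
        lo, hi = ACTION_RANGES[name]
        values[name] = lo + (int(bin_idx) + 0.5) / N_BINS * (hi - lo)
    return values

# Example: after the vision and language tokens, the policy emits action
# tokens, which a low-level controller decodes into a camera command.
toks = action_to_tokens(pan=30.0, tilt=-10.0, zoom=4.0)
print(toks, tokens_to_action(toks))
```

Under this kind of scheme, rotation and zoom become ordinary vocabulary items, so the same autoregressive decoder that produces language can also produce viewpoint-selection actions, which reinforcement learning can then refine.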
Similar Papers
ReconVLA: Reconstructive Vision-Language-Action Model as Effective Robot Perceiver
Robotics
Teaches robots to look where they need to work.
AVA-VLA: Improving Vision-Language-Action models with Active Visual Attention
Machine Learning (CS)
Helps robots learn tasks by remembering past actions.
Eye, Robot: Learning to Look to Act with a BC-RL Perception-Action Loop
Robotics
Robot learns to see and move to do tasks.