VAT: Vision Action Transformer by Unlocking Full Representation of ViT
By: Wenhao Li, Chengwei Ma, Weixin Mao
Potential Business Impact:
Robots learn better by using all vision information.
In robot learning, Vision Transformers (ViTs) are standard for visual perception, yet most methods discard valuable information by using only the final layer's features. We argue this representation is insufficient and propose the Vision Action Transformer (VAT), a novel architecture that extends ViT to unlock its full feature hierarchy. VAT processes specialized action tokens together with visual features across all transformer layers, enabling deep, progressive fusion of perception and action generation. On a suite of simulated manipulation tasks, VAT achieves a 98.15% average success rate across four LIBERO benchmarks, establishing a new state of the art and outperforming prior methods such as OpenVLA-OFT. Our work presents not only a powerful model for imitation learning but also demonstrates the critical importance of leveraging the complete "representation trajectory" of vision models to advance robotic policy learning. The project code is available at https://github.com/sellerbubble/VAT.
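The sketch below illustrates the core idea in PyTorch under our own assumptions (the class names, layer sizes, and number of action tokens are hypothetical, not the authors' configuration): learnable action tokens are appended to the patch-token sequence and attended jointly in every transformer block, so action generation can draw on the per-layer feature hierarchy rather than only the final-layer output.

import torch
import torch.nn as nn


class VATBlock(nn.Module):
    """A standard pre-norm transformer block; patch and action tokens share attention."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))


class VATSketch(nn.Module):
    """Hypothetical VAT-style policy: action tokens are fused with visual features in every layer."""

    def __init__(self, dim=384, depth=12, num_heads=6, num_patches=196,
                 num_action_tokens=8, action_dim=7):
        super().__init__()
        self.patch_embed = nn.Linear(768, dim)          # stand-in for a ViT patch projection
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.action_tokens = nn.Parameter(torch.zeros(1, num_action_tokens, dim))
        self.blocks = nn.ModuleList(VATBlock(dim, num_heads) for _ in range(depth))
        self.action_head = nn.Linear(dim, action_dim)   # maps each action token to an action

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (B, num_patches, 768) flattened image patches
        x = self.patch_embed(patches) + self.pos_embed
        a = self.action_tokens.expand(x.size(0), -1, -1)
        x = torch.cat([x, a], dim=1)                    # action tokens ride through all layers
        for blk in self.blocks:
            x = blk(x)                                  # progressive perception-action fusion
        return self.action_head(x[:, -a.size(1):])      # (B, num_action_tokens, action_dim)


if __name__ == "__main__":
    model = VATSketch()
    dummy_patches = torch.randn(2, 196, 768)            # e.g. 14x14 patches of a 224x224 image
    actions = model(dummy_patches)
    print(actions.shape)                                 # torch.Size([2, 8, 7])

In this sketch the fusion with intermediate features happens implicitly, because the action tokens participate in self-attention at every depth; how VAT actually routes or aggregates per-layer features is defined in the paper and repository linked above.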
Similar Papers
Unifying Perception and Action: A Hybrid-Modality Pipeline with Implicit Visual Chain-of-Thought for Robotic Action Generation
Robotics
Robot learns to do tasks by watching and thinking.
VITA-VLA: Efficiently Teaching Vision-Language Models to Act via Action Expert Distillation
CV and Pattern Recognition
Teaches robots to do tasks using sight and words.
ViT-Linearizer: Distilling Quadratic Knowledge into Linear-Time Vision Models
CV and Pattern Recognition
Makes computer vision faster and better.