Score: 2

VAT: Vision Action Transformer by Unlocking Full Representation of ViT

Published: December 3, 2025 | arXiv ID: 2512.06013v1

By: Wenhao Li, Chengwei Ma, Weixin Mao

Potential Business Impact:

Robots learn better by using all vision information.

Business Areas:
Image Recognition Data and Analytics, Software

In robot learning, Vision Transformers (ViTs) are standard for visual perception, yet most methods discard valuable information by using only the final layer's features. We argue this provides an insufficient representation and propose the Vision Action Transformer (VAT), a novel architecture that extends ViT and unlocks its full feature hierarchy. VAT processes specialized action tokens with visual features across all transformer layers, enabling a deep and progressive fusion of perception and action generation. On a suite of simulated manipulation tasks, VAT achieves a 98.15% average success rate across four LIBERO benchmarks, establishing a new state of the art by outperforming prior methods such as OpenVLA-OFT. Our work presents not only a powerful model for imitation learning but also demonstrates the critical importance of leveraging the complete "representation trajectory" of vision models to advance robotic policies. The GitHub URL for the project code is https://github.com/sellerbubble/VAT.
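The core idea the abstract describes, action tokens co-processed with visual tokens through every transformer layer rather than reading only final-layer features, can be illustrated with a minimal sketch. This is not the authors' implementation: the layer function, dimensions, and token counts below are illustrative stand-ins.

```python
import numpy as np

def transformer_layer(tokens, w):
    # Stand-in for a ViT block: a residual nonlinear map (illustrative only).
    return tokens + np.tanh(tokens @ w)

def vat_forward(patch_tokens, num_action_tokens, layer_weights, rng):
    # Hypothetical sketch: action tokens are appended to the patch tokens
    # and co-processed through EVERY layer, so they can absorb intermediate
    # visual features instead of only the final layer's output.
    d = patch_tokens.shape[-1]
    action_tokens = rng.standard_normal((num_action_tokens, d)) * 0.02
    tokens = np.concatenate([patch_tokens, action_tokens], axis=0)
    for w in layer_weights:
        tokens = transformer_layer(tokens, w)
    # Only the action-token outputs would feed an action head.
    return tokens[-num_action_tokens:]

rng = np.random.default_rng(0)
patches = rng.standard_normal((196, 64))  # e.g. 14x14 ViT patch grid, dim 64
weights = [rng.standard_normal((64, 64)) * 0.02 for _ in range(6)]
actions = vat_forward(patches, num_action_tokens=8, layer_weights=weights, rng=rng)
print(actions.shape)  # (8, 64)
```

The key contrast with the common approach is that the action tokens participate in all six (here, hypothetical) layers, so the policy head sees the whole representation trajectory rather than a single final feature map.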

Repos / Data Links
https://github.com/sellerbubble/VAT

Page Count
20 pages

Category
Computer Science:
CV and Pattern Recognition