ViT-VS: On the Applicability of Pretrained Vision Transformer Features for Generalizable Visual Servoing
By: Alessandro Scherl, Stefan Thalhammer, Bernhard Neuberger, and more
Potential Business Impact:
Robots can see and grasp objects more reliably, even ones they have not seen before.
Visual servoing enables robots to precisely position their end-effector relative to a target object. Classical methods rely on hand-crafted features and are thus universally applicable without task-specific training, but they often struggle with occlusions and environmental variations, whereas learning-based approaches improve robustness yet typically require extensive training. We present a visual servoing approach that leverages pretrained vision transformers for semantic feature extraction, combining the advantages of both paradigms while also generalizing beyond the provided sample. Our approach achieves full convergence in unperturbed scenarios and surpasses classical image-based visual servoing by up to 31.2% relative improvement in perturbed scenarios. It also matches the convergence rates of learning-based methods despite requiring no task- or object-specific training. Real-world evaluations confirm robust performance in end-effector positioning, industrial box manipulation, and grasping of unseen objects using only a reference from the same category. Our code and simulation environment are available at: https://alessandroscherl.github.io/ViT-VS/
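For readers unfamiliar with the classical control law this work builds on, the sketch below shows plain image-based visual servoing (IBVS): the standard point-feature interaction matrix and the velocity command v = -λ L⁺ (s - s*). In a ViT-VS-style pipeline the matched point coordinates would be supplied by semantic correspondences between pretrained vision transformer descriptors of the current and reference images. This is a minimal illustrative sketch, not the authors' implementation; the function names, gain, and depth values are assumptions.

```python
import numpy as np

def interaction_matrix(points_xy, depths):
    """Stack the classical 2x6 image Jacobian (interaction matrix) for each
    normalized image point (x, y) at depth Z, as in standard IBVS."""
    rows = []
    for (x, y), Z in zip(points_xy, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    return np.asarray(rows)

def ibvs_velocity(current_pts, desired_pts, depths, gain=0.5):
    """Classical IBVS control law: v = -gain * pinv(L) @ (s - s*).
    current_pts / desired_pts are Nx2 normalized image coordinates; in a
    ViT-based pipeline they would come from semantic correspondences
    between the current view and a reference image."""
    error = (np.asarray(current_pts) - np.asarray(desired_pts)).ravel()
    L = interaction_matrix(current_pts, depths)
    return -gain * np.linalg.pinv(L) @ error

# Toy usage: four matched points at an assumed depth of 0.5 m.
current = [(0.10, 0.05), (-0.12, 0.04), (0.11, -0.06), (-0.10, -0.05)]
desired = [(0.08, 0.08), (-0.08, 0.08), (0.08, -0.08), (-0.08, -0.08)]
v = ibvs_velocity(current, desired, depths=[0.5] * 4)
print("camera twist [vx vy vz wx wy wz]:", np.round(v, 4))
```

The resulting six-dimensional twist is the camera velocity that drives the observed points toward their desired positions; iterating this update until the feature error vanishes is what convergence means in the abstract above.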
Similar Papers
Learning Priors of Human Motion With Vision Transformers
CV and Pattern Recognition
Tracks people's movement and speed for robots.
Visual Instruction Pretraining for Domain-Specific Foundation Models
CV and Pattern Recognition
Teaches computers to see better by pretraining them on instruction data.
High-Precision Transformer-Based Visual Servoing for Humanoid Robots in Aligning Tiny Objects
CV and Pattern Recognition
Helps robots precisely place tiny tool parts.