Score: 2

Stronger ViTs With Octic Equivariance

Published: May 21, 2025 | arXiv ID: 2505.15441v2

By: David Nordström, Johan Edstedt, Fredrik Kahl, and more

Potential Business Impact:

Makes Vision Transformer models cheaper to run (roughly 40% fewer FLOPs for ViT-H) while improving image classification and segmentation accuracy.

Business Areas:
Image Recognition, Data and Analytics, Software

Recent efforts at scaling computer vision models have established Vision Transformers (ViTs) as the leading architecture. ViTs incorporate weight sharing over image patches as an important inductive bias. In this work, we show that ViTs benefit from incorporating equivariance under the octic group, i.e., reflections and 90-degree rotations, as a further inductive bias. We develop new architectures, octic ViTs, that use octic-equivariant layers and put them to the test on both supervised and self-supervised learning. Through extensive experiments on DeiT-III and DINOv2 training on ImageNet-1K, we show that octic ViTs yield more computationally efficient networks while also improving performance. In particular, we achieve approximately 40% reduction in FLOPs for ViT-H while simultaneously improving both classification and segmentation results.
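To make the "octic" inductive bias concrete, the sketch below illustrates the octic group (the dihedral group of the square: four 90-degree rotations combined with reflections, eight elements in total) acting on an image tensor, and a simple group-averaged feature that is invariant under that action. This is a minimal PyTorch illustration of the symmetry the paper exploits, not the authors' octic-equivariant ViT layers; the function names are hypothetical.

```python
# Minimal sketch of the octic group (4 rotations x 2 reflections = 8 elements)
# acting on an image tensor, plus a group-averaged (invariant) feature.
# Not the paper's implementation; assumes PyTorch and square (C, H, W) inputs.
import torch


def octic_orbit(x: torch.Tensor) -> list[torch.Tensor]:
    """Return the 8 images obtained by 90-degree rotations and reflections."""
    rots = [torch.rot90(x, k, dims=(-2, -1)) for k in range(4)]
    flips = [torch.flip(r, dims=(-1,)) for r in rots]  # reflect each rotation
    return rots + flips


def octic_invariant_feature(x: torch.Tensor) -> torch.Tensor:
    """Average over the group orbit; the result is unchanged by any octic transform."""
    return torch.stack(octic_orbit(x)).mean(dim=0)


if __name__ == "__main__":
    img = torch.randn(3, 224, 224)
    assert len(octic_orbit(img)) == 8
    # Invariance check: rotating the input does not change the averaged feature.
    inv = octic_invariant_feature(img)
    inv_rot = octic_invariant_feature(torch.rot90(img, 1, dims=(-2, -1)))
    print(torch.allclose(inv, inv_rot, atol=1e-5))  # expected: True
```

Equivariant layers in the paper go further than this averaging trick: they constrain the weights so that transforming the input by any of the eight group elements transforms the features in a predictable way, which is what allows parameter and FLOP savings rather than an 8x compute overhead.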

Country of Origin
🇸🇪 Sweden

Repos / Data Links

Page Count
22 pages

Category
Computer Science:
CV and Pattern Recognition