Alias-Free ViT: Fractional Shift Invariance via Linear Attention
By: Hagay Michaeli, Daniel Soudry
Potential Business Impact:
Makes vision models robust to small image shifts, so predictions stay stable when objects move by a few pixels or even fractions of a pixel.
Transformers have emerged as a competitive alternative to convnets in vision tasks, yet they lack the architectural inductive bias of convnets, which may hinder their potential performance. Specifically, Vision Transformers (ViTs) are not translation-invariant and are more sensitive to minor image translations than standard convnets. Previous studies have shown, however, that convnets are also not perfectly shift-invariant, due to aliasing in downsampling and nonlinear layers. Consequently, anti-aliasing approaches have been proposed to certify convnets' translation robustness. Building on this line of work, we propose an Alias-Free ViT, which combines two main components. First, it uses alias-free downsampling and nonlinearities. Second, it uses linear cross-covariance attention that is shift-equivariant to both integer and fractional translations, enabling a shift-invariant global representation. Our model maintains competitive performance in image classification and outperforms similar-sized models in terms of robustness to adversarial translations.
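The two components named in the abstract can be illustrated with a short sketch. The PyTorch code below uses illustrative names and is not the authors' implementation: it shows (1) a cross-covariance ("channel") attention block in the style of XCiT, whose d x d attention map is computed by summing over all tokens, so shifting the input tokens only permutes the value columns and the output shifts identically, and (2) one standard way to downsample without aliasing, an ideal low-pass filter implemented by cropping the Fourier spectrum before subsampling. The main block numerically checks equivariance to an integer token shift.

```python
# Sketch only: illustrative names, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossCovarianceAttention(nn.Module):
    """Channel (cross-covariance) attention: the attention map lives over feature dims."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)
        self.temperature = nn.Parameter(torch.ones(heads, 1, 1))

    def forward(self, x):                     # x: (B, N, D) patch tokens
        B, N, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(t):
            # (B, N, D) -> (B, heads, head_dim, N): channels attend to channels
            return t.reshape(B, N, self.heads, D // self.heads).permute(0, 2, 3, 1)

        q, k, v = split(q), split(k), split(v)
        q = F.normalize(q, dim=-1)             # normalize along the token dimension
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature   # (B, h, d_h, d_h)
        attn = attn.softmax(dim=-1)            # sums over tokens -> permutation-invariant
        out = attn @ v                         # (B, h, d_h, N): per-token mixing of channels
        out = out.permute(0, 3, 1, 2).reshape(B, N, D)
        return self.proj(out)


def ideal_lowpass_downsample(x, factor=2):
    """Alias-free downsampling: crop the Fourier spectrum (ideal low-pass), then invert."""
    B, C, H, W = x.shape
    X = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    h, w = H // factor, W // factor
    top, left = (H - h) // 2, (W - w) // 2
    X = X[..., top:top + h, left:left + w]     # keep only low frequencies
    x_down = torch.fft.ifft2(torch.fft.ifftshift(X, dim=(-2, -1))).real
    return x_down / (factor ** 2)              # rescale for the smaller inverse FFT


if __name__ == "__main__":
    tokens = torch.randn(1, 196, 64)           # e.g. 14x14 patch tokens, 64-dim
    xca = CrossCovarianceAttention(dim=64)
    shifted = torch.roll(tokens, shifts=3, dims=1)   # integer token shift
    # Shift-equivariance check: shifting the input shifts the output identically.
    print(torch.allclose(xca(shifted), torch.roll(xca(tokens), 3, dims=1), atol=1e-5))

    img = torch.randn(1, 3, 32, 32)
    print(ideal_lowpass_downsample(img).shape)  # torch.Size([1, 3, 16, 16])
```

The check above covers only integer shifts, which is the easy case; handling fractional translations additionally requires treating the features as band-limited signals (e.g., shifting via sinc interpolation), which is where the alias-free downsampling and nonlinearities come in.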
Similar Papers
ViT-Linearizer: Distilling Quadratic Knowledge into Linear-Time Vision Models
CV and Pattern Recognition
Distills quadratic-attention ViTs into linear-time vision models for faster inference.
A Lightweight Convolution and Vision Transformer integrated model with Multi-scale Self-attention Mechanism
CV and Pattern Recognition
Combines lightweight convolutions with a Vision Transformer and multi-scale self-attention for an efficient model.