Equi-ViT: Rotational Equivariant Vision Transformer for Robust Histopathology Analysis
By: Fuyao Chen, Yuexi Du, Elèonore V. Lieffrig, and more
Vision Transformers (ViTs) have gained rapid adoption in computational pathology for their ability to model long-range dependencies through self-attention, addressing the limitations of convolutional neural networks, which excel at capturing local patterns but struggle with global contextual reasoning. Recent pathology-specific foundation models have further advanced performance by leveraging large-scale pretraining. However, standard ViTs remain inherently non-equivariant to transformations such as rotations and reflections, which are ubiquitous in histopathology imaging. To address this limitation, we propose Equi-ViT, which integrates an equivariant convolution kernel into the patch embedding stage of a ViT architecture, imparting built-in rotational equivariance to learned representations. Equi-ViT achieves superior rotation-consistent patch embeddings and stable classification performance across image orientations. Our results on a public colorectal cancer dataset demonstrate that incorporating equivariant patch embedding enhances data efficiency and robustness, suggesting that equivariant transformers could serve as more generalizable backbones for ViT applications in histopathology, such as digital pathology foundation models.
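To make the equivariant patch-embedding idea concrete, here is a minimal PyTorch sketch of a C4 (90°-rotation) group-equivariant patch embedding: the strided convolution normally used to patchify the image is replaced by a lifting convolution whose kernel is applied at four rotated orientations, and the orientation axis is then pooled to produce rotation-robust patch tokens. The module name EquiPatchEmbed, the restriction to the C4 group, and the max pooling over orientations are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EquiPatchEmbed(nn.Module):
    """Hypothetical C4-equivariant patch embedding (illustrative sketch,
    not the Equi-ViT authors' exact implementation).

    A single learnable kernel bank is applied at 0/90/180/270 degree
    rotations (a "lifting" group convolution over the cyclic group C4);
    pooling over the orientation axis makes each patch token invariant
    to 90-degree rotations of its content.
    """

    def __init__(self, in_chans=3, embed_dim=768, patch_size=16):
        super().__init__()
        self.patch_size = patch_size
        # One shared kernel bank; rotated copies are generated on the fly.
        self.weight = nn.Parameter(
            torch.empty(embed_dim, in_chans, patch_size, patch_size))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        self.bias = nn.Parameter(torch.zeros(embed_dim))

    def forward(self, x):
        # x: (B, C, H, W) with H and W divisible by patch_size
        responses = []
        for k in range(4):
            # Rotate the kernel by k * 90 degrees in the spatial plane.
            w_k = torch.rot90(self.weight, k, dims=(-2, -1))
            responses.append(
                F.conv2d(x, w_k, self.bias, stride=self.patch_size))
        # Stack over the orientation (group) axis: (B, 4, D, H/P, W/P)
        g = torch.stack(responses, dim=1)
        # Pool over orientations -> rotation-robust patch embeddings.
        tokens = g.amax(dim=1)                    # (B, D, H/P, W/P)
        return tokens.flatten(2).transpose(1, 2)  # (B, N, D)


if __name__ == "__main__":
    embed = EquiPatchEmbed(in_chans=3, embed_dim=64, patch_size=16)
    img = torch.randn(1, 3, 224, 224)
    out = embed(img)
    out_rot = embed(torch.rot90(img, 1, dims=(-2, -1)))
    print(out.shape)  # torch.Size([1, 196, 64])
    # Rotating the image only permutes the patch tokens spatially,
    # so order-independent statistics of the token set agree.
    print(torch.allclose(out.sum(1), out_rot.sum(1), atol=1e-4))
```

In this sketch, a 90° rotation of the input both permutes the patch grid and rotates each patch's content; the max over the four oriented responses absorbs the content rotation, so the resulting tokens are simply a spatial permutation of the originals, which is the behavior the usage example checks.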
Similar Papers
Stronger ViTs With Octic Equivariance
CV and Pattern Recognition
Improves Vision Transformers by building in octic-group equivariance.
HistoViT: Vision Transformer for Accurate and Scalable Histopathological Cancer Diagnosis
Image and Video Processing
Applies Vision Transformers to accurate, scalable histopathological cancer diagnosis.
ECViT: Efficient Convolutional Vision Transformer with Local-Attention and Multi-scale Stages
CV and Pattern Recognition
Builds an efficient convolutional Vision Transformer using local attention and multi-scale stages.