CoMViT: An Efficient Vision Backbone for Supervised Classification in Medical Imaging
By: Aon Safdar, Mohamed Saadeldin
Potential Business Impact:
Makes AI see medical pictures better with less power.
Vision Transformers (ViTs) have demonstrated strong potential in medical imaging; however, their high computational demands and tendency to overfit on small datasets limit their applicability in real-world clinical scenarios. In this paper, we present CoMViT, a compact and generalizable Vision Transformer architecture optimized for resource-constrained medical image analysis. CoMViT integrates a convolutional tokenizer, diagonal masking, dynamic temperature scaling, and pooling-based sequence aggregation to improve performance and generalization. Through systematic architectural optimization, CoMViT achieves robust performance across twelve MedMNIST datasets while maintaining a lightweight design with only ~4.5M parameters. It matches or outperforms deeper CNN and ViT variants, offering up to 5-20x parameter reduction without sacrificing accuracy. Qualitative Grad-CAM analyses show that CoMViT consistently attends to clinically relevant regions despite its compact size. These results highlight the potential of principled ViT redesign for developing efficient and interpretable models in low-resource medical imaging settings.
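The abstract names two attention-level mechanisms, diagonal masking and temperature scaling, without giving formulas. As a rough illustration of what such an attention step could look like, here is a minimal NumPy sketch (the exact CoMViT formulation, including how the temperature is made "dynamic", is not specified in this summary, so the function below is an assumption for illustration only):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_temperature_attention(q, k, v, temperature=1.0):
    """Single-head attention with a diagonal mask and temperature scaling.

    q, k, v: arrays of shape (seq_len, dim).
    temperature: scalar; larger values flatten the attention distribution.
    This is a hypothetical sketch, not the paper's exact implementation.
    """
    d = q.shape[-1]
    # Scaled dot-product logits, further divided by the temperature.
    scores = (q @ k.T) / (np.sqrt(d) * temperature)
    # Diagonal masking: block each token from attending to itself.
    np.fill_diagonal(scores, -np.inf)
    weights = softmax(scores, axis=-1)
    return weights @ v, weights
```

With the diagonal masked out, each token's attention weights are redistributed entirely over the other tokens, which is one plausible way such a mask could discourage trivial self-attention on small datasets.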
Similar Papers
CoCAViT: Compact Vision Transformer with Robust Global Coordination
CV and Pattern Recognition
Makes small computer vision models work better everywhere.
ECViT: Efficient Convolutional Vision Transformer with Local-Attention and Multi-scale Stages
CV and Pattern Recognition
Makes AI see pictures faster and better.
A Lightweight Convolution and Vision Transformer integrated model with Multi-scale Self-attention Mechanism
CV and Pattern Recognition
Makes computers see better with less effort.