Accelerating Vision Transformers with Adaptive Patch Sizes
By: Rohan Choudhury, JungEun Kim, Jinhyung Park, and more
Potential Business Impact:
Makes computer vision faster by changing picture piece sizes.
Vision Transformers (ViTs) partition input images into uniformly sized patches regardless of their content, resulting in long input sequence lengths for high-resolution images. We present Adaptive Patch Transformers (APT), which addresses this by using multiple patch sizes within the same image. APT reduces the total number of input tokens by allocating larger patch sizes in more homogeneous areas and smaller patches in more complex ones. APT achieves a drastic speedup in ViT inference and training, increasing throughput by 40% on ViT-L and 50% on ViT-H while maintaining downstream performance, and can be applied to a previously fine-tuned ViT, converging in as little as 1 epoch. It also significantly reduces training and inference time without loss of performance in high-resolution dense visual tasks, achieving up to 30% faster training and inference in visual QA, object detection, and semantic segmentation.
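
To make the core idea concrete, here is a minimal sketch of content-adaptive patch allocation: homogeneous regions keep one large patch (one token), while high-variance regions are split into smaller patches. This is an illustration under assumptions, not the paper's implementation; the function name adaptive_patches, the 32/16 patch sizes, and the variance threshold are all hypothetical choices.

    # Sketch only: split an image into a mix of large and small patches
    # based on per-block pixel variance. Not the authors' code.
    import numpy as np

    def adaptive_patches(image, large=32, small=16, var_threshold=1e-3):
        """Return a list of (row, col, size) patches for `image` (H, W, C).

        Each entry would later be resized/embedded into a single token.
        """
        h, w, _ = image.shape
        patches = []
        for r in range(0, h, large):
            for c in range(0, w, large):
                block = image[r:r + large, c:c + large]
                if block.var() <= var_threshold:
                    # Homogeneous region: one large patch -> one token.
                    patches.append((r, c, large))
                else:
                    # Complex region: subdivide into smaller patches.
                    for dr in range(0, large, small):
                        for dc in range(0, large, small):
                            patches.append((r + dr, c + dc, small))
        return patches

    if __name__ == "__main__":
        img = np.zeros((224, 224, 3), dtype=np.float32)
        img[64:160, 64:160] += np.random.rand(96, 96, 3)  # a "complex" region
        tokens = adaptive_patches(img)
        uniform = (224 // 16) ** 2
        print(f"adaptive tokens: {len(tokens)} vs uniform 16x16 tokens: {uniform}")

In this toy example, only the textured region is subdivided, so the token count drops well below the 196 tokens a uniform 16x16 tiling would produce, which is the mechanism behind the throughput gains described above.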
Similar Papers
Vision Transformers: the threat of realistic adversarial patches
CV and Pattern Recognition
Tricks AI into seeing people when they aren't there.
Alias-Free ViT: Fractional Shift Invariance via Linear Attention
CV and Pattern Recognition
Makes computer vision better at seeing small changes.
CascadedViT: Cascaded Chunk-FeedForward and Cascaded Group Attention Vision Transformer
CV and Pattern Recognition
Makes AI see better using less power.