Accelerating Vision Transformers with Adaptive Patch Sizes

Published: October 20, 2025 | arXiv ID: 2510.18091v1

By: Rohan Choudhury, JungEun Kim, Jinhyung Park, and more

Potential Business Impact:

Makes computer vision faster by adapting the size of image patches to the content of the image.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision Transformers (ViTs) partition input images into uniformly sized patches regardless of their content, resulting in long input sequence lengths for high-resolution images. We present Adaptive Patch Transformers (APT), which addresses this by using multiple patch sizes within the same image. APT reduces the total number of input tokens by allocating larger patch sizes in more homogeneous areas and smaller patches in more complex ones. APT achieves a drastic speedup in ViT inference and training, increasing throughput by 40% on ViT-L and 50% on ViT-H while maintaining downstream performance, and can be applied to a previously fine-tuned ViT, converging in as little as 1 epoch. It also significantly reduces training and inference time without loss of performance in high-resolution dense visual tasks, achieving up to 30% faster training and inference in visual QA, object detection, and semantic segmentation.
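To make the core idea concrete, here is a minimal sketch of content-adaptive patch allocation. The paper's actual criterion and merging scheme may differ; this toy version uses per-region pixel variance as the homogeneity score, and the function names, patch sizes, and threshold are illustrative assumptions, not the authors' API.

```python
import torch

def adaptive_patch_map(image, base=16, large=32, var_threshold=0.01):
    """Decide, per large-patch cell, whether one large patch suffices.

    image: (C, H, W) tensor with values in [0, 1].
    Returns a (H//large, W//large) boolean grid: True -> keep one
    large patch (homogeneous), False -> split the cell into
    (large // base) ** 2 small patches (complex region).
    Variance threshold is a toy stand-in for the paper's criterion.
    """
    C, H, W = image.shape
    assert H % large == 0 and W % large == 0
    # Unfold into non-overlapping large cells: (C, H/large, W/large, large, large)
    cells = image.unfold(1, large, large).unfold(2, large, large)
    # Per-cell variance over channels and pixels as a homogeneity score.
    var = cells.float().var(dim=(0, 3, 4))
    return var < var_threshold

def count_tokens(keep_large, base=16, large=32):
    """Token count after adaptive allocation: one token per homogeneous
    cell, (large // base) ** 2 tokens per complex cell."""
    per_cell = (large // base) ** 2
    n_large = int(keep_large.sum())
    n_small = (keep_large.numel() - n_large) * per_cell
    return n_large + n_small

# Example: flat left half (mergeable), noisy right half (kept fine-grained).
img = torch.zeros(3, 224, 224)
img[:, :, 112:] = torch.rand(3, 224, 112)
keep = adaptive_patch_map(img)
print(f"adaptive tokens: {count_tokens(keep)} vs uniform {(224 // 16) ** 2}")
```

On this synthetic image roughly half the cells merge, cutting the token count well below the uniform 196, which is the mechanism behind the reported throughput gains: attention cost scales with sequence length, so fewer tokens in homogeneous regions directly reduce compute.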

Page Count
19 pages

Category
Computer Science:
CV and Pattern Recognition