SPoT: Subpixel Placement of Tokens in Vision Transformers
By: Martine Hjelkrem-Tan, Marius Aasan, Gabriel Y. Arteaga, and more
Potential Business Impact:
Lets computers see details better with fewer parts.
Vision Transformers naturally accommodate sparsity, yet standard tokenization methods confine features to discrete patch grids. This constraint prevents models from fully exploiting sparse regimes, forcing awkward compromises. We propose Subpixel Placement of Tokens (SPoT), a novel tokenization strategy that positions tokens continuously within images, effectively sidestepping grid-based limitations. With our proposed oracle-guided search, we uncover substantial performance gains achievable with ideal subpixel token positioning, drastically reducing the number of tokens necessary for accurate predictions during inference. SPoT provides a new direction for flexible, efficient, and interpretable ViT architectures, redefining sparsity as a strategic advantage rather than an imposed limitation.
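To make the core idea concrete, below is a minimal sketch of how token features could be sampled at continuous, subpixel positions instead of on a fixed patch grid. It assumes PyTorch and bilinear interpolation via `F.grid_sample`; the function name `extract_subpixel_tokens` and the overall structure are illustrative assumptions, not the authors' implementation, and the paper's oracle-guided search for ideal positions is not shown.

```python
# Illustrative sketch (not the authors' code): sample ViT token embeddings
# at continuous (x, y) positions using bilinear interpolation.
import torch
import torch.nn.functional as F

def extract_subpixel_tokens(image: torch.Tensor,
                            positions: torch.Tensor,
                            patch_embed: torch.nn.Conv2d) -> torch.Tensor:
    """Sample token embeddings at continuous positions.

    image:       (B, C, H, W) input batch.
    positions:   (B, N, 2) coordinates as (x, y) in [0, 1], one per token.
    patch_embed: a standard ViT patch-embedding convolution.
    """
    # Dense feature map from the usual patch embedding: (B, D, H', W').
    feats = patch_embed(image)
    # grid_sample expects coordinates in [-1, 1]; reshape to (B, N, 1, 2).
    grid = (positions * 2.0 - 1.0).unsqueeze(2)
    # Bilinear interpolation gives each token a feature vector at a
    # subpixel location rather than a fixed grid cell: (B, D, N, 1).
    tokens = F.grid_sample(feats, grid, mode='bilinear', align_corners=False)
    return tokens.squeeze(-1).transpose(1, 2)  # (B, N, D)

# Example: 64 tokens at random subpixel positions for a 224x224 image.
# img = torch.randn(1, 3, 224, 224)
# embed = torch.nn.Conv2d(3, 768, kernel_size=16, stride=16)
# pos = torch.rand(1, 64, 2)
# tokens = extract_subpixel_tokens(img, pos, embed)  # shape (1, 64, 768)
```

In a full model the positions could be predicted, learned, or found by search (as in the paper's oracle study); the key point is that token features are no longer tied to integer grid cells, so far fewer tokens can cover the informative regions of an image.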
Similar Papers
SPOT: Sparsification with Attention Dynamics via Token Relevance in Vision Transformers
CV and Pattern Recognition
Makes computer vision faster by removing unneeded parts.
Make Your Training Flexible: Towards Deployment-Efficient Video Models
CV and Pattern Recognition
Makes video models train faster, using less data.
Overcoming Vocabulary Constraints with Pixel-level Fallback
Computation and Language
Helps computers understand any language, even new ones.