Geometrically Constrained and Token-Based Probabilistic Spatial Transformers
By: Johann Schmidt, Sebastian Stober
Potential Business Impact:
Helps computers recognize insects correctly, even when tilted.
Fine-grained visual classification (FGVC) remains highly sensitive to geometric variability, where objects appear under arbitrary orientations, scales, and perspective distortions. While equivariant architectures address this issue, they typically require substantial computational resources and restrict the hypothesis space. We revisit Spatial Transformer Networks (STNs) as a canonicalization tool for transformer-based vision pipelines, emphasizing their flexibility, backbone-agnostic nature, and lack of architectural constraints. We propose a probabilistic, component-wise extension that improves robustness. Specifically, we decompose affine transformations into rotation, scaling, and shearing, and regress each component under geometric constraints using a shared localization encoder. To capture uncertainty, we model each component with a Gaussian variational posterior and perform sampling-based canonicalization during inference. A novel component-wise alignment loss leverages augmentation parameters to guide spatial alignment. Experiments on challenging moth classification benchmarks demonstrate that our method consistently improves robustness compared to other STNs.
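The abstract's core idea of decomposing an affine transformation into rotation, scaling, and shearing, and sampling each component from a Gaussian variational posterior, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the factorization order (rotation · scale · shear), the function names, and the diagonal-Gaussian reparameterization are assumptions for demonstration.

```python
import numpy as np

def compose_affine(theta, sx, sy, shear):
    """Build a 2x2 linear map from components (assumed order: R @ S @ H)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # rotation
    S = np.diag([sx, sy])                            # anisotropic scaling
    H = np.array([[1.0, shear],
                  [0.0, 1.0]])                       # horizontal shear
    return R @ S @ H

def sample_canonicalizations(mu, log_sigma, n_samples=8, seed=None):
    """Draw component parameters from a diagonal Gaussian posterior
    via the reparameterization trick and compose one affine per sample."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n_samples, len(mu)))
    params = np.asarray(mu) + np.exp(log_sigma) * eps
    return [compose_affine(th, sx, sy, sh) for th, sx, sy, sh in params]

# With near-zero posterior variance, samples collapse to the mean transform.
mats = sample_canonicalizations(mu=[0.0, 1.0, 1.0, 0.0],
                                log_sigma=[-20.0] * 4,
                                n_samples=4, seed=0)
```

At inference time, each sampled affine would canonicalize the input (e.g. via a differentiable grid sampler) before classification; averaging predictions over samples is one plausible way to exploit the captured uncertainty.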
Similar Papers
Quantized Visual Geometry Grounded Transformer
CV and Pattern Recognition
Makes 3D cameras faster and smaller.
Robust Visual Localization via Semantic-Guided Multi-Scale Transformer
CV and Pattern Recognition
Helps robots see where they are anywhere.
GFT: Gradient Focal Transformer
CV and Pattern Recognition
Helps computers see tiny differences in pictures.