Flexible and Efficient Spatio-Temporal Transformer for Sequential Visual Place Recognition
By: Yu Kiu Lau, Chao Chen, and more
Potential Business Impact:
Helps robots remember places faster and with less memory.
Sequential Visual Place Recognition (Seq-VPR) leverages transformers to capture spatio-temporal features effectively; however, existing approaches prioritize performance at the expense of flexibility and efficiency. In practice, a transformer-based Seq-VPR model should be flexible with respect to the number of frames per sequence (seq-length), deliver fast inference, and have low memory usage to meet real-time constraints. To our knowledge, no existing transformer-based Seq-VPR method achieves both flexibility and efficiency. To address this gap, we propose Adapt-STformer, a Seq-VPR method built around our novel Recurrent Deformable Transformer Encoder (Recurrent-DTE), which uses an iterative recurrent mechanism to fuse information from multiple sequential frames. This design naturally supports variable seq-lengths, fast inference, and low memory usage. Experiments on the Nordland, Oxford, and NuScenes datasets show that Adapt-STformer boosts recall by up to 17% while reducing sequence extraction time by 36% and lowering memory usage by 35% compared to the second-best baseline.
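To make the recurrent-fusion idea concrete, below is a minimal sketch of how an encoder might fuse per-frame tokens one frame at a time instead of attending over the whole sequence at once. This is not the authors' implementation: the class, its parameters, and the pooling step are hypothetical, and a standard `nn.TransformerEncoderLayer` stands in for the paper's deformable attention. The point it illustrates is why iterative fusion decouples the model from seq-length and keeps per-step memory roughly constant.

```python
# Hypothetical sketch inspired by the Recurrent-DTE description in the
# abstract; module names and shapes are assumptions, and a vanilla
# TransformerEncoderLayer replaces the paper's deformable attention.
import torch
import torch.nn as nn


class RecurrentFusionEncoder(nn.Module):
    """Fuses per-frame feature tokens into a running sequence state."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Stand-in for a deformable transformer encoder layer.
        self.layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, seq_len, tokens, dim) per-frame feature tokens.
        state = frames[:, 0]  # initialize the state from the first frame
        for t in range(1, frames.size(1)):
            # Concatenate the running state with the next frame's tokens,
            # let self-attention mix them, then keep the updated state.
            mixed = self.layer(torch.cat([state, frames[:, t]], dim=1))
            state = mixed[:, : state.size(1)]
        # Pool the fused state into a single sequence descriptor.
        return state.mean(dim=1)


# Usage: any seq-length works without retraining, since fusion is
# iterative and the state size never grows with the sequence.
enc = RecurrentFusionEncoder(dim=256)
desc = enc(torch.randn(2, 5, 49, 256))  # batch=2, 5 frames, 49 tokens
print(desc.shape)  # torch.Size([2, 256])
```

Because each step only attends over the fixed-size state plus one frame's tokens, memory does not scale with the number of frames, which is consistent with the flexibility and efficiency goals stated in the abstract.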
Similar Papers
TeTRA-VPR: A Ternary Transformer Approach for Compact Visual Place Recognition
CV and Pattern Recognition
Makes robots see and remember places better, faster.
DSFormer: A Dual-Scale Cross-Learning Transformer for Visual Place Recognition
CV and Pattern Recognition
Helps robots find their way in new places.
EmbodiedPlace: Learning Mixture-of-Features with Embodied Constraints for Visual Place Recognition
CV and Pattern Recognition
Helps robots remember where they've been.