Flexible and Efficient Spatio-Temporal Transformer for Sequential Visual Place Recognition

Published: October 5, 2025 | arXiv ID: 2510.04282v1

By: Yu Kiu Lau, Chao Chen, and more

Potential Business Impact:

Helps robots remember places faster and with less memory.

Business Areas:
Image Recognition, Data and Analytics, Software

Sequential Visual Place Recognition (Seq-VPR) leverages transformers to capture spatio-temporal features effectively; however, existing approaches prioritize performance at the expense of flexibility and efficiency. In practice, a transformer-based Seq-VPR model should be flexible with respect to the number of frames per sequence (seq-length), deliver fast inference, and have low memory usage to meet real-time constraints. To our knowledge, no existing transformer-based Seq-VPR method achieves both flexibility and efficiency. To address this gap, we propose Adapt-STformer, a Seq-VPR method built around our novel Recurrent Deformable Transformer Encoder (Recurrent-DTE), which uses an iterative recurrent mechanism to fuse information from multiple sequential frames. This design naturally supports variable seq-lengths, fast inference, and low memory usage. Experiments on the Nordland, Oxford, and NuScenes datasets show that Adapt-STformer boosts recall by up to 17% while reducing sequence extraction time by 36% and lowering memory usage by 35% compared to the second-best baseline.
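To illustrate why a recurrent fusion loop handles variable seq-lengths with constant memory, here is a minimal sketch, not the authors' Recurrent-DTE: a standard PyTorch `TransformerEncoderLayer` stands in for the deformable encoder, and the class name `RecurrentFusion` and all shapes are illustrative assumptions.

```python
# Hypothetical sketch of recurrent frame fusion (not the paper's code).
# A persistent state is fused with one frame's tokens at a time, so the
# loop works for any seq-length and memory stays constant per step.
import torch
import torch.nn as nn

class RecurrentFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        # Stand-in for the deformable transformer encoder (assumption).
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)

    def forward(self, frames):
        # frames: (batch, seq_len, tokens, dim) per-frame feature tokens
        state = frames[:, 0]                  # initialize from first frame
        for t in range(1, frames.size(1)):    # iterate over remaining frames
            fused = torch.cat([state, frames[:, t]], dim=1)
            state = self.encoder(fused)[:, :state.size(1)]
        return state.mean(dim=1)              # pooled sequence descriptor

# Works unchanged for seq_len = 3, 5, 10, ... with the same memory per step.
desc = RecurrentFusion()(torch.randn(2, 5, 16, 256))
print(desc.shape)  # torch.Size([2, 256])
```

The design point the sketch captures is that only the fused state and the current frame are ever in the encoder at once, which is what allows flexible seq-lengths and bounded memory compared to attending over all frames jointly.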

Page Count
8 pages

Category
Computer Science:
Computer Vision and Pattern Recognition