SpatiaLoc: Leveraging Multi-Level Spatial Enhanced Descriptors for Cross-Modal Localization
By: Tianyi Shang, Pengjie Xu, Zhaojun Deng, and more
Potential Business Impact:
Robots find places using words and 3D maps.
Cross-modal localization using text and point clouds enables robots to localize themselves via natural language descriptions, with applications in autonomous navigation and human-robot interaction. In this task, objects often recur across text and point clouds, making spatial relationships the most discriminative cues for localization. Given this characteristic, we present SpatiaLoc, a coarse-to-fine framework that emphasizes spatial relationships at both the instance and global levels. In the coarse stage, a Bézier Enhanced Object Spatial Encoder (BEOSE) models instance-level spatial relationships using quadratic Bézier curves, while a Frequency Aware Encoder (FAE) generates global-level spatial representations in the frequency domain. In the fine stage, an Uncertainty Aware Gaussian Fine Localizer (UGFL) regresses 2D positions by modeling predictions as Gaussian distributions trained with an uncertainty-aware loss function. Extensive experiments on KITTI360Pose demonstrate that SpatiaLoc significantly outperforms existing state-of-the-art (SOTA) methods.
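The two core ingredients named in the abstract can be illustrated with a minimal sketch. Note the paper does not publish its equations here, so the function names, shapes, and the use of a Gaussian negative log-likelihood are assumptions for illustration: a quadratic Bézier curve B(t) = (1-t)²p₀ + 2(1-t)t·p₁ + t²p₂ connecting two object centroids through a control point, and an uncertainty-aware loss that scores a predicted 2D position as a Gaussian with learned variance.

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, n=8):
    """Sample n points on the quadratic Bezier curve from p0 to p2
    with control point p1: B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2.
    (Illustrative stand-in for how BEOSE might encode a pairwise
    spatial relation as a curve between object centroids.)"""
    t = np.linspace(0.0, 1.0, n)[:, None]          # (n, 1) parameter values
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def gaussian_nll(mu, log_var, target):
    """Uncertainty-aware regression loss (hypothetical form):
    negative log-likelihood of the target 2D position under an
    axis-aligned Gaussian N(mu, exp(log_var)) per coordinate.
    High predicted variance down-weights the squared error but
    is penalized by the log-variance term."""
    return 0.5 * np.mean(np.exp(-log_var) * (target - mu) ** 2 + log_var)

# Curve between two object centroids, bent through a control point.
p0, p1, p2 = np.array([0.0, 0.0]), np.array([1.0, 2.0]), np.array([2.0, 0.0])
pts = quadratic_bezier(p0, p1, p2, n=5)            # endpoints are p0 and p2

# A confident (low-variance) wrong guess costs more than an
# uncertain (high-variance) one with the same error.
target = np.array([1.0, 1.0])
loss_confident = gaussian_nll(np.zeros(2), np.full(2, -2.0), target)
loss_uncertain = gaussian_nll(np.zeros(2), np.full(2, 2.0), target)
```

The curve's midpoint (t = 0.5) lands at 0.25·p₀ + 0.5·p₁ + 0.25·p₂, so the control point p₁ bends the path; in a spatial encoder this bend is one way a relation between two objects could carry more geometry than a straight offset vector.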
Similar Papers
SMGeo: Cross-View Object Geo-Localization with Grid-Level Mixture-of-Experts
CV and Pattern Recognition
Find objects in satellite photos from drone pictures.
SpatialGeo: Boosting Spatial Reasoning in Multimodal LLMs via Geometry-Semantics Fusion
CV and Pattern Recognition
Helps computers understand 3D shapes and where things are.
Text-Driven 3D Lidar Place Recognition for Autonomous Driving
CV and Pattern Recognition
Helps robots find places using descriptions.