Pole-Image: A Self-Supervised Pole-Anchored Descriptor for Long-Term LiDAR Localization and Map Maintenance
By: Wuhao Xie, Kanji Tanaka
Potential Business Impact:
Helps robots know where they are.
Long-term autonomy for mobile robots requires both robust self-localization and reliable map maintenance. Conventional landmark-based methods face a fundamental trade-off between landmarks that are easy to detect but weakly distinctive (e.g., poles) and landmarks that are highly distinctive but hard to detect stably (e.g., local point cloud structures). This work addresses the challenge of descriptively identifying a unique "signature" (a local point cloud) by leveraging an easily detectable, high-precision "anchor" (such as a pole). To this end, we propose a novel canonical representation, "Pole-Image," a hybrid method that uses poles as anchors to generate signatures from the surrounding 3D structure. Pole-Image represents a pole-like landmark detected in a LiDAR point cloud, together with its surrounding environment, as a 2D polar-coordinate image with the pole itself as the origin. This representation exploits the pole's nature as a high-precision reference point, explicitly encoding the relative geometry between the stable pole and the variable surrounding point cloud. The key advantage of pole landmarks is that detection is extremely easy, which lets the robot track the same pole over time and automatically collect diverse observations of it (positive pairs) at large scale. This feasibility of data acquisition makes contrastive learning (CL) applicable: with CL, the model learns a viewpoint-invariant and highly discriminative descriptor. The contributions are twofold: 1) the descriptor overcomes perceptual aliasing, enabling robust self-localization; and 2) the high-precision encoding enables high-sensitivity change detection, contributing to map maintenance.
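The abstract does not specify the exact rasterization or training details, so the following is only an illustrative sketch of the two ideas it describes. The first block shows one plausible way to build a pole-anchored polar-coordinate image from a LiDAR scan; the function name pole_image and the parameters r_max, n_r, and n_theta are assumptions, as is the choice of storing a point count per cell.

```python
import numpy as np

def pole_image(points, pole_xy, r_max=20.0, n_r=64, n_theta=64):
    """Rasterize LiDAR points around one detected pole into a 2D
    polar-coordinate image whose origin is the pole axis (a sketch of
    the "Pole-Image" idea; channel content is an assumption here).

    points  : (N, 3) array of LiDAR points in the sensor/map frame.
    pole_xy : (2,) estimated ground-plane position of the pole.
    Returns an (n_r, n_theta) image of point counts per (range, bearing) cell.
    """
    # Express every point relative to the pole anchor.
    dx = points[:, 0] - pole_xy[0]
    dy = points[:, 1] - pole_xy[1]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)  # in (-pi, pi]

    # Keep only the local neighborhood around the pole.
    keep = r < r_max
    r, theta = r[keep], theta[keep]

    # Bin range and bearing so the image encodes the relative geometry
    # of the surrounding structure with respect to the stable pole.
    ri = np.minimum((r / r_max * n_r).astype(int), n_r - 1)
    ti = ((theta + np.pi) / (2.0 * np.pi) * n_theta).astype(int) % n_theta
    img = np.zeros((n_r, n_theta), dtype=np.float32)
    np.add.at(img, (ri, ti), 1.0)
    return img
```

In this parameterization, a change of viewing direction around the pole shows up roughly as a cyclic shift along the bearing axis, which is one reason a polar anchor is convenient when learning a viewpoint-invariant descriptor from positive pairs.

For the contrastive step, a standard choice (assumed here, not confirmed by the abstract) is a symmetric InfoNCE loss over pairs of Pole-Images of the same physical pole observed from different viewpoints, with the other poles in the batch serving as negatives; the function name info_nce and the temperature value are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE over a batch of positive pairs (z_a[i], z_b[i]),
    e.g. descriptors of two observations of the same pole; all other
    rows in the batch act as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (B, B) cosine similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```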
Similar Papers
DuLoc: Life-Long Dual-Layer Localization in Changing and Dynamic Expansive Scenarios
Robotics
Helps self-driving vehicles find their way anywhere.
A New Statistical Approach to the Performance Analysis of Vision-based Localization
CV and Pattern Recognition
Find your exact spot using cameras and distances.
OpenLiDARMap: Zero-Drift Point Cloud Mapping using Map Priors
Robotics
Helps robots find their way without GPS.