TOPO-Bench: An Open-Source Topological Mapping Evaluation Framework with Quantifiable Perceptual Aliasing
By: Jiaming Wang, Diwen Liu, Jizhuo Chen, and more
Potential Business Impact:
Helps robots map places more accurately and reliably.
Topological mapping offers a compact and robust representation for navigation, but progress in the field is hindered by the lack of standardized evaluation metrics, datasets, and protocols. Existing systems are assessed in different environments and under different criteria, preventing fair and reproducible comparisons. Moreover, a key challenge, perceptual aliasing, remains under-quantified despite its strong influence on system performance. We address these gaps by (1) formalizing topological consistency as the fundamental property of topological maps and showing that localization accuracy provides an efficient and interpretable surrogate metric, and (2) proposing the first quantitative measure of dataset ambiguity to enable fair comparisons across environments. To support this protocol, we curate a diverse benchmark dataset with calibrated ambiguity levels, implement and release deep-learned baseline systems, and evaluate them alongside classical methods. Our experiments and analysis yield new insights into the limitations of current approaches under perceptual aliasing. All datasets, baselines, and evaluation tools are fully open-sourced to foster consistent and reproducible research in topological mapping.
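To make the idea of quantifying perceptual aliasing concrete, here is a minimal sketch of one plausible ambiguity measure: the fraction of observation pairs from *different* places whose descriptors are nonetheless highly similar. The function name, the cosine-similarity choice, and the threshold are illustrative assumptions, not the paper's actual metric.

```python
import numpy as np

def ambiguity_score(descriptors, place_ids, sim_threshold=0.9):
    """Toy proxy for perceptual aliasing (illustrative, not the paper's metric).

    Returns the fraction of cross-place descriptor pairs whose cosine
    similarity exceeds sim_threshold: higher means the dataset contains
    more distinct places that look alike.
    """
    d = np.asarray(descriptors, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)   # unit-normalize rows
    sims = d @ d.T                                     # pairwise cosine similarity
    ids = np.asarray(place_ids)
    cross = ids[:, None] != ids[None, :]               # pairs from different places
    aliased = (sims > sim_threshold) & cross           # look-alike, yet distinct
    return aliased.sum() / max(cross.sum(), 1)
```

For example, three observations where the first two are near-identical descriptors from two different places would score 2 aliased pairs out of 6 cross-place pairs, i.e. about 0.33, while fully distinctive descriptors would score 0.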
Similar Papers
When Annotators Disagree, Topology Explains: Mapper, a Topological Tool for Exploring Text Embedding Geometry and Ambiguity
Computation and Language
Shows how computers understand tricky words.
Topological Metric for Unsupervised Embedding Quality Evaluation
Machine Learning (CS)
Measures how well computer "brains" learn without teachers.
CSMapping: Scalable Crowdsourced Semantic Mapping and Topology Inference for Autonomous Driving
CV and Pattern Recognition
Makes self-driving cars map roads better.