Monte Carlo Diffusion for Generalizable Learning-Based RANSAC
By: Jiale Wang, Chen Zhao, Wei Ke, and more
Potential Business Impact:
Makes computer vision work better with messy pictures.
Random Sample Consensus (RANSAC) is a fundamental approach for robustly estimating parametric models from noisy data. Existing learning-based RANSAC methods use deep learning to enhance the robustness of RANSAC against outliers. However, these approaches are trained and tested on data generated by the same algorithms, leading to limited generalization to out-of-distribution data at inference time. To address this, we introduce a novel diffusion-based paradigm that progressively injects noise into ground-truth data, simulating the noisy conditions under which learning-based RANSAC is trained. To enhance data diversity, we incorporate Monte Carlo sampling into the diffusion paradigm, approximating diverse data distributions by introducing different types of randomness at multiple stages. We evaluate our approach in the context of feature matching through comprehensive experiments on the ScanNet and MegaDepth datasets. The results demonstrate that our Monte Carlo diffusion mechanism significantly improves the generalization ability of learning-based RANSAC. We also conduct extensive ablation studies that highlight the effectiveness of key components in our framework.
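For illustration only, a minimal sketch of what such a Monte Carlo diffusion corruption could look like for feature correspondences. This is not the authors' implementation: the function name, the Gaussian noise model, the uniform outlier injection, and the stages at which randomness is sampled (timestep, noise scale, outlier ratio) are all assumptions made for the example.

```python
import numpy as np

def monte_carlo_diffusion(gt_matches, rng, t_max=100):
    """Hypothetical forward-diffusion corruption of ground-truth correspondences.

    gt_matches: (N, 4) array of clean matches (x1, y1, x2, y2).
    Randomness is sampled at several stages to approximate diverse
    noisy data distributions for training a learning-based RANSAC.
    """
    matches = gt_matches.copy()
    n = len(matches)

    # Stage 1: sample a random diffusion timestep t ~ U{1, ..., t_max}.
    t = rng.integers(1, t_max + 1)

    # Stage 2: sample a noise scale and add Gaussian perturbations whose
    # magnitude grows with t (progressive noise injection).
    sigma = rng.uniform(0.5, 3.0) * (t / t_max)
    matches += rng.normal(0.0, sigma, size=matches.shape)

    # Stage 3: sample an outlier ratio and replace that fraction of
    # correspondences with uniformly random outliers.
    outlier_ratio = rng.uniform(0.0, 0.8) * (t / t_max)
    n_out = int(outlier_ratio * n)
    idx = rng.choice(n, size=n_out, replace=False)
    lo, hi = matches.min(axis=0), matches.max(axis=0)
    matches[idx] = rng.uniform(lo, hi, size=(n_out, matches.shape[1]))

    # Labels: 1 for surviving inliers, 0 for injected outliers.
    labels = np.ones(n, dtype=np.int64)
    labels[idx] = 0
    return matches, labels

# Example: corrupt 500 synthetic ground-truth matches.
rng = np.random.default_rng(0)
clean = rng.uniform(0, 640, size=(500, 4))
noisy, labels = monte_carlo_diffusion(clean, rng)
```

Because the timestep, noise scale, and outlier ratio are each drawn at random, repeated calls produce training batches with varied corruption levels rather than a single fixed noise distribution, which is the generalization idea the abstract describes.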
Similar Papers
RANSAC Revisited: An Improved Algorithm for Robust Subspace Recovery under Adversarial and Noisy Corruptions
Machine Learning (CS)
Cleans messy data even with sneaky tricks.
Robust Representation Consistency Model via Contrastive Denoising
CV and Pattern Recognition
Makes AI smarter and faster at recognizing images.
Fixing the RANSAC Stopping Criterion
CV and Pattern Recognition
Fixes computer vision to find better patterns.