CoMatcher: Multi-View Collaborative Feature Matching
By: Jintao Zhang, Zimin Xia, Mingyue Dong, and more
Potential Business Impact:
Helps computers match photos of the same scene taken from many different angles.
This paper proposes a multi-view collaborative matching strategy for reliable track construction in complex scenarios. We observe that the pairwise matching paradigms applied to image set matching often result in ambiguous estimation when the selected independent pairs exhibit significant occlusions or extreme viewpoint changes. This challenge primarily stems from the inherent uncertainty in interpreting intricate 3D structures based on limited two-view observations, as the 3D-to-2D projection leads to significant information loss. To address this, we introduce CoMatcher, a deep multi-view matcher to (i) leverage complementary context cues from different views to form a holistic 3D scene understanding and (ii) utilize cross-view projection consistency to infer a reliable global solution. Building on CoMatcher, we develop a groupwise framework that fully exploits cross-view relationships for large-scale matching tasks. Extensive experiments on various complex scenarios demonstrate the superiority of our method over the mainstream two-view matching paradigm.
Similar Papers
Dense Match Summarization for Faster Two-view Estimation
CV and Pattern Recognition
Speeds up camera pose estimation between image pairs.
Mono3R: Exploiting Monocular Cues for Geometric 3D Reconstruction
CV and Pattern Recognition
Makes 3D pictures from photos better.
SegMASt3R: Geometry Grounded Segment Matching
CV and Pattern Recognition
Matches image segments across big viewpoint changes.