MVSMamba: Multi-View Stereo with State Space Model
By: Jianfei Jiang, Qiankun Liu, Hongyuan Liu, and more
Potential Business Impact:
Makes 3D models from pictures faster and better.
Robust feature representations are essential for learning-based Multi-View Stereo (MVS), which relies on accurate feature matching. Recent MVS methods leverage Transformers to capture long-range dependencies based on local features extracted by conventional feature pyramid networks. However, the quadratic complexity of Transformer-based MVS methods makes it difficult to balance performance and efficiency. Motivated by the global modeling capability and linear complexity of the Mamba architecture, we propose MVSMamba, the first Mamba-based MVS network. MVSMamba enables efficient global feature aggregation with minimal computational overhead. To fully exploit Mamba's potential in MVS, we propose a Dynamic Mamba module (DM-module) built on a novel reference-centered dynamic scanning strategy, which enables (1) efficient intra- and inter-view feature interaction from the reference view to the source views, (2) omnidirectional multi-view feature representations, and (3) multi-scale global feature aggregation. Extensive experiments demonstrate that MVSMamba outperforms state-of-the-art MVS methods on the DTU dataset and the Tanks and Temples benchmark in both accuracy and efficiency. The source code is available at https://github.com/JianfeiJ/MVSMamba.
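To make the abstract's core idea concrete, the sketch below shows (a) how multi-view feature maps could be flattened into a single token sequence that starts at the reference view, so state flows from the reference into the source views, and (b) a toy diagonal state-space recurrence that processes that sequence in linear time, in contrast to the quadratic cost of attention. This is a minimal illustration under assumed names and shapes (`reference_centered_sequence`, `ToySSMScan` are hypothetical), not the authors' DM-module; the actual implementation, including omnidirectional scanning and multi-scale aggregation, is in the linked repository.

```python
# Hypothetical sketch: reference-centered scanning + linear-time SSM scan.
# Not MVSMamba's actual code; names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


def reference_centered_sequence(ref_feat, src_feats):
    """Flatten the reference view first, then each source view.

    ref_feat:  (B, C, H, W) reference-view feature map
    src_feats: list of (B, C, H, W) source-view feature maps
    Returns a (B, L, C) token sequence led by the reference tokens, so a
    forward scan propagates reference information into the source views.
    """
    views = [ref_feat] + list(src_feats)
    tokens = [v.flatten(2).transpose(1, 2) for v in views]  # each (B, H*W, C)
    return torch.cat(tokens, dim=1)


class ToySSMScan(nn.Module):
    """Diagonal state-space recurrence: h_t = a * h_{t-1} + u_t, y_t = W h_t.

    One pass over the sequence costs O(L), unlike O(L^2) for attention.
    """

    def __init__(self, dim, state_dim=16):
        super().__init__()
        self.in_proj = nn.Linear(dim, state_dim)
        self.out_proj = nn.Linear(state_dim, dim)
        self.log_a = nn.Parameter(torch.zeros(state_dim))  # per-channel decay

    def forward(self, x):  # x: (B, L, C)
        u = self.in_proj(x)                # (B, L, S)
        a = torch.sigmoid(self.log_a)      # keep decay in (0, 1) for stability
        h = torch.zeros_like(u[:, 0])
        ys = []
        for t in range(u.shape[1]):        # linear in sequence length L
            h = a * h + u[:, t]
            ys.append(h)
        y = torch.stack(ys, dim=1)         # (B, L, S)
        return x + self.out_proj(y)        # residual connection


# Usage: a reference view plus two source views at 8x8 resolution.
ref = torch.randn(2, 32, 8, 8)
srcs = [torch.randn(2, 32, 8, 8) for _ in range(2)]
seq = reference_centered_sequence(ref, srcs)   # (2, 3*64, 32) = (2, 192, 32)
out = ToySSMScan(dim=32)(seq)                  # (2, 192, 32)
```

A real Mamba block additionally makes the recurrence input-dependent (selective) and uses a hardware-aware parallel scan; the explicit Python loop here is only for readability.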
Similar Papers
VCMamba: Bridging Convolutions with Multi-Directional Mamba for Efficient Visual Representation
CV and Pattern Recognition
Helps computers see details and the big picture.
DefMamba: Deformable Visual State Space Model
CV and Pattern Recognition
Finds important parts of pictures better.
X-VMamba: Explainable Vision Mamba
CV and Pattern Recognition
Shows how computer vision "sees" medical images.