Bridging Spectral-wise and Multi-spectral Depth Estimation via Geometry-guided Contrastive Learning
By: Ukcheol Shin, Kyunghyun Lee, Jean Oh
Potential Business Impact:
Helps self-driving cars see better in any weather.
Deploying depth estimation networks in the real world requires high-level robustness against various adverse conditions to ensure safe and reliable autonomy. For this purpose, many autonomous vehicles employ multi-modal sensor systems, including RGB cameras, NIR cameras, thermal cameras, LiDAR, or radar. These systems mainly adopt one of two strategies for using multiple sensors: modality-wise inference and multi-modal fused inference. The former is flexible but memory-inefficient, unreliable, and vulnerable; multi-modal fusion can provide high-level reliability, yet it requires a specialized architecture. In this paper, we propose an effective solution, named the align-and-fuse strategy, for depth estimation from multi-spectral images. In the align stage, we align the embedding spaces of multiple spectral bands to learn a shareable representation across multi-spectral images by minimizing a contrastive loss over global features and spatially aligned local features, using a geometry cue. After that, in the fuse stage, we train an attachable feature fusion module that can selectively aggregate the multi-spectral features for reliable and robust predictions. With the proposed method, a single depth network can achieve both spectral-invariant and multi-spectral fused depth estimation while preserving reliability, memory efficiency, and flexibility.
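To make the two stages concrete, here is a minimal sketch, assuming PyTorch, of the general pattern the abstract describes: an InfoNCE-style contrastive term that pulls paired embeddings from two spectra together (the align stage), and a small per-pixel gating module that selectively aggregates multi-spectral feature maps (the fuse stage). The encoder, feature shapes, loss form, and module names are illustrative assumptions, not the paper's exact architecture, and the geometry-based warping of local features is only noted in the comments.

```python
# Illustrative sketch only (assumptions: PyTorch, paired features per spectrum,
# InfoNCE-style alignment, per-pixel gated fusion). Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def contrastive_align_loss(anchor, positive, temperature=0.07):
    """Pull paired (anchor, positive) embeddings from two spectra together.

    anchor, positive: (N, D) L2-normalized features of the same scene, e.g.
    global features, or local features already warped into a common view
    using a geometry cue (warping itself is omitted here).
    """
    logits = anchor @ positive.t() / temperature            # (N, N) similarities
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)


class FusionModule(nn.Module):
    """Attachable module that selectively aggregates multi-spectral features.

    A 1x1 conv predicts a per-pixel softmax weight for each spectrum, so the
    fused feature can down-weight unreliable bands.
    """

    def __init__(self, channels, num_spectra):
        super().__init__()
        self.gate = nn.Conv2d(channels * num_spectra, num_spectra, kernel_size=1)

    def forward(self, feats):  # feats: list of (B, C, H, W), one per spectrum
        stacked = torch.stack(feats, dim=1)                  # (B, S, C, H, W)
        weights = self.gate(torch.cat(feats, dim=1))          # (B, S, H, W)
        weights = weights.softmax(dim=1).unsqueeze(2)          # (B, S, 1, H, W)
        return (stacked * weights).sum(dim=1)                  # (B, C, H, W)


if __name__ == "__main__":
    # Align stage (toy): global features from, e.g., RGB and thermal encoders.
    rgb_feat = F.normalize(torch.randn(8, 128), dim=1)
    thermal_feat = F.normalize(torch.randn(8, 128), dim=1)
    align_loss = contrastive_align_loss(rgb_feat, thermal_feat)

    # Fuse stage (toy): aggregate spatial feature maps from two spectra.
    fuse = FusionModule(channels=64, num_spectra=2)
    fused = fuse([torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)])
    print(align_loss.item(), fused.shape)
```

In this reading, the aligned encoder alone supports spectral-invariant (modality-wise) inference, while attaching the fusion module enables multi-spectral fused inference from the same backbone.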
Similar Papers
Language-Depth Navigated Thermal and Visible Image Fusion
CV and Pattern Recognition
Makes robots see better in the dark.
DepthFusion: Depth-Aware Hybrid Feature Fusion for LiDAR-Camera 3D Object Detection
CV and Pattern Recognition
Helps self-driving cars see better in 3D.
Multimodal and Multiview Deep Fusion for Autonomous Marine Navigation
CV and Pattern Recognition
Helps boats see better in fog and storms.