3D-Aware Multi-Task Learning with Cross-View Correlations for Dense Scene Understanding
By: Xiaoye Wang, Chen Tang, Xiangyu Yue, and more
Potential Business Impact:
Helps computers understand 3D scenes from many pictures.
This paper addresses the challenge of training a single network to jointly perform multiple dense prediction tasks, such as segmentation and depth estimation, i.e., multi-task learning (MTL). Current approaches mainly capture cross-task relations in the 2D image space, often producing unstructured features that lack 3D-awareness. We argue that 3D-awareness is vital for modeling the cross-task correlations essential to comprehensive scene understanding. We propose to address this problem by integrating correlations across views, i.e., a cost volume, as a geometric consistency signal in the MTL network. Specifically, we introduce a lightweight Cross-view Module (CvM), shared across tasks, that exchanges information across views and captures cross-view correlations; its output is integrated with features from the MTL encoder for multi-task prediction. This module is architecture-agnostic and can be applied to both single- and multi-view data. Extensive experiments on NYUv2 and PASCAL-Context demonstrate that our method effectively injects geometric consistency into existing MTL methods and improves their performance.
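The abstract does not detail the CvM's internals, but a minimal sketch of the general idea it describes, building a cross-view correlation (cost) volume and fusing it back into task-shared encoder features, might look like the following. PyTorch is assumed; the class name CrossViewModule, the max_disp window, and the local-shift correlation scheme are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CrossViewModule(nn.Module):
    """Hypothetical sketch of a shared cross-view module (CvM).

    Correlates reference-view features with a local window of source-view
    features to form a cost volume, then fuses the volume back into the
    reference features so downstream task heads receive geometry-aware
    representations. Layer sizes and the fusion scheme are assumptions.
    """

    def __init__(self, channels: int, max_disp: int = 4):
        super().__init__()
        self.max_disp = max_disp  # search-window radius for correlations
        num_corr = (2 * max_disp + 1) ** 2  # one correlation map per offset
        # Project cost volume + features back to the feature dimension.
        self.fuse = nn.Sequential(
            nn.Conv2d(channels + num_corr, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, ref_feat: torch.Tensor, src_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = ref_feat.shape
        corrs = []
        # Correlate each reference pixel with shifted source-view features
        # over a local window (a simple dense cost volume).
        for dy in range(-self.max_disp, self.max_disp + 1):
            for dx in range(-self.max_disp, self.max_disp + 1):
                shifted = torch.roll(src_feat, shifts=(dy, dx), dims=(2, 3))
                corr = (ref_feat * shifted).sum(dim=1, keepdim=True) / c ** 0.5
                corrs.append(corr)
        cost_volume = torch.cat(corrs, dim=1)  # (B, num_corr, H, W)
        # Residually inject geometric consistency into task-shared features.
        return ref_feat + self.fuse(torch.cat([ref_feat, cost_volume], dim=1))

# Usage: one shared module, applied once per view pair before the task heads.
cvm = CrossViewModule(channels=64)
out = cvm(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```

Because the module only consumes and emits feature maps, this design would be architecture-agnostic in the sense the abstract claims: it can sit between any MTL encoder and its task-specific decoders, and for single-view data the second input can simply be the same feature map.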
Similar Papers
A Survey on Deep Multi-Task Learning in Connected Autonomous Vehicles
Robotics
Helps self-driving cars see and predict better.
C3Po: Cross-View Cross-Modality Correspondence by Pointmap Prediction
CV and Pattern Recognition
Helps computers match photos to building blueprints.
MuM: Multi-View Masked Image Modeling for 3D Vision
CV and Pattern Recognition
Teaches computers to understand 3D from many pictures.