DFormerv2: Geometry Self-Attention for RGBD Semantic Segmentation
By: Bo-Wen Yin, Jiao-Long Cao, Ming-Ming Cheng, and more
Potential Business Impact:
Helps computers see better in dark or bright light.
Recent advances in scene understanding benefit greatly from depth maps because of their 3D geometry information, especially in complex conditions (e.g., low light and overexposure). Existing approaches encode depth maps alongside RGB images and perform feature fusion between the two to enable more robust predictions. Since depth can be regarded as a geometric supplement to RGB images, a natural question arises: do we really need to explicitly encode depth information with neural networks, as is done for RGB images? Motivated by this question, in this paper we investigate a new way to learn RGBD feature representations and present DFormerv2, a strong RGBD encoder that explicitly uses depth maps as geometry priors rather than encoding depth information with neural networks. Our goal is to extract geometry cues from the depth and spatial distances among all image patch tokens, which are then used as geometry priors to allocate attention weights in self-attention. Extensive experiments demonstrate that DFormerv2 exhibits exceptional performance on various RGBD semantic segmentation benchmarks. Code is available at: https://github.com/VCIP-RGBD/DFormer.
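The geometry self-attention idea described in the abstract lends itself to a compact sketch. The snippet below is a minimal illustration, not the paper's released implementation: it assumes the geometry prior enters as an additive bias on the attention logits, built from pairwise depth gaps and 2D patch-center distances. The class name `GeometryAttention` and the learnable per-head decay parameters are hypothetical choices for illustration.

```python
# Minimal sketch of geometry-prior self-attention (assumed design, not
# DFormerv2's actual code): pairwise depth gaps and spatial distances
# between patch tokens penalize the attention logits before softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeometryAttention(nn.Module):
    """Self-attention modulated by a geometry prior built from
    per-token depth values and 2D spatial distances (illustrative)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Hypothetical learnable per-head decay rates for the two geometry terms.
        self.depth_decay = nn.Parameter(torch.ones(num_heads))
        self.spatial_decay = nn.Parameter(torch.ones(num_heads))

    def forward(self, x: torch.Tensor, depth: torch.Tensor,
                coords: torch.Tensor) -> torch.Tensor:
        # x:      (B, N, C)  patch tokens
        # depth:  (B, N)     mean depth per patch
        # coords: (N, 2)     patch-center coordinates on the 2D grid
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each (B, H, N, head_dim)

        # Geometry prior: pairwise depth gaps and spatial distances.
        depth_dist = (depth[:, :, None] - depth[:, None, :]).abs()  # (B, N, N)
        spatial_dist = torch.cdist(coords, coords)                  # (N, N)

        # Larger geometric distance -> stronger penalty on the logits.
        bias = (-self.depth_decay.view(1, -1, 1, 1) * depth_dist[:, None]
                - self.spatial_decay.view(1, -1, 1, 1) * spatial_dist[None, None])

        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5 + bias
        attn = F.softmax(attn, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

Biasing the logits rather than gating the softmax output keeps the attention weights normalized while letting geometrically close tokens dominate, which is one plausible way to realize "depth as a prior" without a separate depth-encoding branch.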
Similar Papers
HDBFormer: Efficient RGB-D Semantic Segmentation with A Heterogeneous Dual-Branch Framework
CV and Pattern Recognition
Helps robots understand rooms using color and distance.
Vanishing Depth: A Depth Adapter with Positional Depth Encoding for Generalized Image Encoders
CV and Pattern Recognition
Helps robots see and understand distances better.
DepthMatch: Semi-Supervised RGB-D Scene Parsing through Depth-Guided Regularization
CV and Pattern Recognition
Teaches computers to understand scenes with less work.