Vanishing Depth: A Depth Adapter with Positional Depth Encoding for Generalized Image Encoders
By: Paul Koch, Jörg Krüger, Ankit Chowdhury, and more
Potential Business Impact:
Helps robots see and understand distances better.
Generalized metric depth understanding is critical for precise vision-guided robotics, yet current state-of-the-art (SOTA) vision encoders do not support it. To address this, we propose Vanishing Depth, a self-supervised training approach that extends pretrained RGB encoders to incorporate and align metric depth into their feature embeddings. Built on our novel positional depth encoding, it enables feature extraction that is stable across depth densities and invariant to depth distributions. We achieve performance improvements and SOTA results across a spectrum of relevant RGBD downstream tasks, without the need to fine-tune the encoder. Most notably, we achieve 56.05 mIoU on SUN-RGBD segmentation, 88.3 RMSE on VOID depth completion, and 83.8 Top-1 accuracy on NYUv2 scene classification. In 6D object pose estimation, we outperform our predecessors DinoV2, EVA-02, and Omnivore, and we achieve SOTA results for non-finetuned encoders in several related RGBD downstream tasks.
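The abstract does not spell out the positional depth encoding itself. As a point of reference, a minimal sketch of one common formulation, sinusoidal encoding applied per pixel to metric depth values (frequencies, dimensions, and the function name here are illustrative assumptions, not the paper's actual design):

```python
import numpy as np

def positional_depth_encoding(depth, num_freqs=8):
    """Illustrative sketch: map a (H, W) metric depth map to a
    (H, W, 2 * num_freqs) feature map via sinusoidal frequencies,
    so the embedding varies smoothly with depth. The paper's exact
    encoding may differ."""
    freqs = 2.0 ** np.arange(num_freqs)   # geometric frequency ladder (assumed)
    angles = depth[..., None] * freqs     # broadcast to (H, W, num_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# Tiny 2x2 depth map in meters
depth = np.array([[0.5, 1.0], [2.0, 4.0]])
enc = positional_depth_encoding(depth, num_freqs=4)
print(enc.shape)  # (2, 2, 8)
```

A multi-frequency encoding like this gives downstream layers both coarse and fine depth cues, which is one plausible route to the density- and distribution-invariant behavior the abstract claims.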
Similar Papers
DFormerv2: Geometry Self-Attention for RGBD Semantic Segmentation
CV and Pattern Recognition
Helps computers see better in dark or bright light.
Deep Neural Networks for Accurate Depth Estimation with Latent Space Features
CV and Pattern Recognition
Makes robots see in 3D better with one camera.
Depth Anything 3: Recovering the Visual Space from Any Views
CV and Pattern Recognition
Lets computers see 3D shapes from pictures.