Depth Anything 3: Recovering the Visual Space from Any Views
By: Haotong Lin, Sili Chen, Junhao Liew, and more
Potential Business Impact:
Lets computers recover consistent 3D geometry from one or many ordinary photos, with or without camera information.
We present Depth Anything 3 (DA3), a model that predicts spatially consistent geometry from an arbitrary number of visual inputs, with or without known camera poses. In pursuit of minimal modeling, DA3 yields two key insights: a single plain transformer (e.g., vanilla DINO encoder) is sufficient as a backbone without architectural specialization, and a singular depth-ray prediction target obviates the need for complex multi-task learning. Through our teacher-student training paradigm, the model achieves a level of detail and generalization on par with Depth Anything 2 (DA2). We establish a new visual geometry benchmark covering camera pose estimation, any-view geometry and visual rendering. On this benchmark, DA3 sets a new state-of-the-art across all tasks, surpassing prior SOTA VGGT by an average of 44.3% in camera pose accuracy and 25.1% in geometric accuracy. Moreover, it outperforms DA2 in monocular depth estimation. All models are trained exclusively on public academic datasets.
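The abstract's two design choices, a single plain transformer backbone shared across all input views and a single depth-ray prediction target, can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the authors' released implementation: a generic nn.TransformerEncoder stands in for the vanilla DINO encoder, and the head layout (one depth value plus a unit ray direction per patch token) is a hypothetical reading of the "depth-ray" target.

```python
# Illustrative sketch of "plain transformer + depth-ray target" for any number of views.
# All names and shapes here are assumptions for illustration only.
import torch
import torch.nn as nn


class AnyViewDepthRay(nn.Module):
    def __init__(self, patch=14, dim=384, depth=6, heads=6):
        super().__init__()
        # Patch embedding shared across all input views.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # Plain transformer backbone (placeholder for a vanilla DINO encoder).
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=depth)
        # Single prediction head: 1 depth value + 3 ray-direction components per patch.
        self.head = nn.Linear(dim, 1 + 3)

    def forward(self, views):
        # views: (B, N_views, 3, H, W); camera poses are not required.
        b, n, c, h, w = views.shape
        tokens = self.patch_embed(views.flatten(0, 1))        # (B*N, dim, h', w')
        tokens = tokens.flatten(2).transpose(1, 2)            # (B*N, P, dim)
        # Concatenate tokens from all views so attention runs across views.
        tokens = tokens.reshape(b, n * tokens.shape[1], -1)   # (B, N*P, dim)
        out = self.head(self.backbone(tokens))                # (B, N*P, 4)
        depth = out[..., :1].exp()                            # positive depths
        rays = nn.functional.normalize(out[..., 1:], dim=-1)  # unit ray directions
        return depth, rays


# Usage: two uncalibrated views of the same scene.
model = AnyViewDepthRay()
imgs = torch.randn(1, 2, 3, 224, 224)
depth, rays = model(imgs)
print(depth.shape, rays.shape)  # per-patch depth and ray predictions
```

The point of the sketch is that nothing in it is view-count specific or pose-dependent: the same backbone and single head serve any number of inputs, which is the minimal-modeling claim the abstract makes.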
Similar Papers
Describe Anything Anywhere At Any Moment
CV and Pattern Recognition
Robots understand and remember everything they see.
Online Video Depth Anything: Temporally-Consistent Depth Prediction with Low Memory Consumption
CV and Pattern Recognition
Lets cameras estimate 3D depth from video in real time with low memory use.
Depth AnyEvent: A Cross-Modal Distillation Paradigm for Event-Based Monocular Depth Estimation
CV and Pattern Recognition
Helps event cameras see depth under fast motion and dim light.