Depth Anything at Any Condition
By: Boyuan Sun, Modi Jin, Bowen Yin, and more
Potential Business Impact:
Helps computers see depth in any weather.
We present Depth Anything at Any Condition (DepthAnything-AC), a foundation monocular depth estimation (MDE) model capable of handling diverse environmental conditions. Previous foundation MDE models achieve impressive performance across general scenes but do not perform well in complex open-world environments involving challenging conditions, such as illumination variations, adverse weather, and sensor-induced distortions. To overcome the challenges of data scarcity and the inability to generate high-quality pseudo-labels from corrupted images, we propose an unsupervised consistency regularization finetuning paradigm that requires only a relatively small amount of unlabeled data. Furthermore, we propose the Spatial Distance Constraint, which explicitly enforces the model to learn patch-level relative relationships, yielding clearer semantic boundaries and more accurate details. Experimental results demonstrate the zero-shot capabilities of DepthAnything-AC across diverse benchmarks, including real-world adverse weather benchmarks, synthetic corruption benchmarks, and general benchmarks.
Project Page: https://ghost233lism.github.io/depthanything-AC-page
Code: https://github.com/HVision-NKU/DepthAnythingAC
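To make the two ideas in the abstract concrete, here is a minimal NumPy sketch of (a) a consistency regularization loss between depth predicted from a clean image and from its corrupted counterpart, and (b) a patch-level spatial distance constraint that matches pairwise patch-to-patch distance matrices. The function names, the L1 loss forms, and the Euclidean distance matrix are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


def consistency_loss(d_clean: np.ndarray, d_corrupt: np.ndarray) -> float:
    """Illustrative consistency term: the depth map predicted from the
    clean view acts as a pseudo-label for the corrupted view, so no
    ground-truth depth is needed (unsupervised finetuning)."""
    return float(np.mean(np.abs(d_clean - d_corrupt)))


def spatial_distance_constraint(p_student: np.ndarray,
                                p_teacher: np.ndarray) -> float:
    """Illustrative patch-level constraint: both inputs are
    (num_patches, dim) feature arrays; we penalize disagreement between
    their pairwise Euclidean distance matrices, encouraging the student
    to preserve patch-level relative relationships."""
    def pdist(p: np.ndarray) -> np.ndarray:
        # (N, N) matrix of Euclidean distances between all patch pairs.
        diff = p[:, None, :] - p[None, :, :]
        return np.sqrt((diff ** 2).sum(-1))

    return float(np.mean(np.abs(pdist(p_student) - pdist(p_teacher))))


# Toy usage with random stand-ins for model outputs.
rng = np.random.default_rng(0)
depth_clean = rng.uniform(0.1, 10.0, size=(32, 32))
depth_corrupt = depth_clean + rng.normal(0.0, 0.05, size=(32, 32))
patches = rng.normal(size=(16, 64))

loss_c = consistency_loss(depth_clean, depth_corrupt)
loss_s = spatial_distance_constraint(patches, patches * 1.1)
```

In the actual method these terms would be computed on model predictions and combined into the finetuning objective; the sketch only shows the shape of the two losses.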
Similar Papers
Depth Anything with Any Prior
CV and Pattern Recognition
Makes any picture show how far away things are.
Always Clear Depth: Robust Monocular Depth Estimation under Adverse Weather
CV and Pattern Recognition
Helps self-driving cars see in bad weather.
Video Depth Anything: Consistent Depth Estimation for Super-Long Videos
CV and Pattern Recognition
Makes videos show depth accurately for a long time.