Score: 2

DepthFocus: Controllable Depth Estimation for See-Through Scenes

Published: November 21, 2025 | arXiv ID: 2511.16993v1

By: Junhong Min, Jimin Kim, Cheol-Hui Min, and more

BigTech Affiliations: Samsung

Potential Business Impact:

Lets computers perceive depth through glass and other transparent surfaces, much as humans do.

Business Areas:
Image Recognition, Data and Analytics, Software

Depth in the real world is rarely singular. Transmissive materials create layered ambiguities that confound conventional perception systems. Existing models remain passive, attempting to estimate static depth maps anchored to the nearest surface, while humans actively shift focus to perceive a desired depth. We introduce DepthFocus, a steerable Vision Transformer that redefines stereo depth estimation as intent-driven control. Conditioned on a scalar depth preference, the model dynamically adapts its computation to focus on the intended depth, enabling selective perception within complex scenes. The training primarily leverages our newly constructed 500k multi-layered synthetic dataset, designed to capture diverse see-through effects. DepthFocus not only achieves state-of-the-art performance on conventional single-depth benchmarks like BOOSTER, a dataset notably rich in transparent and reflective objects, but also quantitatively demonstrates intent-aligned estimation on our newly proposed real and synthetic multi-depth datasets. Moreover, it exhibits strong generalization capabilities on unseen see-through scenes, underscoring its robustness as a significant step toward active and human-like 3D perception.
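The abstract describes conditioning a Vision Transformer on a scalar depth preference so the model focuses on the intended depth layer. The paper does not spell out the mechanism, but one common way to inject a scalar into transformer tokens is a sinusoidal embedding added to the token features (FiLM-style shift). The sketch below is purely illustrative of that general pattern; the function names, embedding choice, and normalization of the depth scalar are assumptions, not the authors' implementation.

```python
import numpy as np

def embed_depth_preference(d, dim=16):
    """Sinusoidal embedding of a scalar depth preference d in [0, 1].
    (Illustrative choice; the paper's conditioning scheme is unspecified.)"""
    freqs = 2.0 ** np.arange(dim // 2)          # geometric frequency ladder
    angles = d * freqs * np.pi
    return np.concatenate([np.sin(angles), np.cos(angles)])

def condition_tokens(tokens, d):
    """Shift every token by the depth embedding so downstream attention
    can specialize to the requested depth layer (FiLM-style additive shift)."""
    emb = embed_depth_preference(d, tokens.shape[-1])
    return tokens + emb[None, :]

# Same scene tokens, two different depth intents (e.g. glass surface vs. behind it).
tokens = np.zeros((4, 16))                      # 4 tokens, 16-dim features
near_view = condition_tokens(tokens, 0.1)       # focus on the near layer
far_view = condition_tokens(tokens, 0.9)        # focus on the far layer
```

Because the embedding differs for each scalar, identical image tokens yield different conditioned inputs, which is the minimal property an intent-driven model needs: the same stereo pair can be steered toward either depth layer of a see-through scene.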

Country of Origin
🇰🇷 South Korea

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition