Towards High-Precision Depth Sensing via Monocular-Aided iToF and RGB Integration
By: Yansong Du, Yutong Deng, Yuting Zhou, and more
Potential Business Impact:
Makes blurry depth pictures sharp and clear.
This paper presents a novel iToF-RGB fusion framework designed to address the inherent limitations of indirect Time-of-Flight (iToF) depth sensing, such as low spatial resolution, limited field-of-view (FoV), and structural distortion in complex scenes. The proposed method first reprojects the narrow-FoV iToF depth map onto the wide-FoV RGB coordinate system through a precise geometric calibration and alignment module, ensuring pixel-level correspondence between modalities. A dual-encoder fusion network is then employed to jointly extract complementary features from the reprojected iToF depth and RGB image, guided by monocular depth priors to recover fine-grained structural details and perform depth super-resolution. By integrating cross-modal structural cues and depth consistency constraints, our approach achieves enhanced depth accuracy, improved edge sharpness, and seamless FoV expansion. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed framework significantly outperforms state-of-the-art methods in terms of accuracy, structural consistency, and visual quality.
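The reprojection step described in the abstract follows standard pinhole-camera geometry: back-project each iToF pixel to a 3-D point with the iToF intrinsics, apply the calibrated iToF-to-RGB rigid transform, and project into the wide-FoV RGB image. The sketch below illustrates that idea only; the function name, inputs, nearest-pixel scatter, and z-buffer handling are assumptions for illustration, not the paper's actual alignment module.

```python
import numpy as np

def reproject_itof_to_rgb(depth_itof, K_itof, K_rgb, R, t, rgb_hw):
    """Warp a narrow-FoV iToF depth map into the wide-FoV RGB camera frame.

    depth_itof : (H, W) metric depth in the iToF camera frame.
    K_itof, K_rgb : 3x3 intrinsics of the iToF and RGB cameras.
    R, t : rotation (3x3) and translation (3,) from iToF to RGB frame.
    rgb_hw : (height, width) of the RGB image.
    Returns a sparse depth map on the RGB pixel grid (0 where nothing lands).
    """
    h, w = depth_itof.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # iToF pixel grid
    z = depth_itof.ravel()
    valid = z > 0

    # Back-project iToF pixels to 3-D points in the iToF camera frame.
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)], axis=0)
    pts_itof = (np.linalg.inv(K_itof) @ pix) * z

    # Rigid transform into the RGB camera frame, then project with K_rgb.
    pts_rgb = R @ pts_itof + t.reshape(3, 1)
    proj = K_rgb @ pts_rgb
    with np.errstate(divide="ignore", invalid="ignore"):
        u_rgb = np.round(proj[0] / proj[2]).astype(int)
        v_rgb = np.round(proj[1] / proj[2]).astype(int)
    z_rgb = pts_rgb[2]

    H_rgb, W_rgb = rgb_hw
    keep = (valid & (z_rgb > 0) &
            (u_rgb >= 0) & (u_rgb < W_rgb) &
            (v_rgb >= 0) & (v_rgb < H_rgb))

    # Scatter with a simple z-buffer: assign far points first so nearer
    # surfaces overwrite them when two points hit the same RGB pixel.
    out = np.zeros((H_rgb, W_rgb))
    order = np.argsort(-z_rgb[keep])
    out[v_rgb[keep][order], u_rgb[keep][order]] = z_rgb[keep][order]
    return out
```

In practice the result is a sparse, low-resolution depth map defined on the RGB pixel grid, which is exactly the input the fusion network is then asked to densify and super-resolve.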
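The dual-encoder fusion stage could, in principle, be organized as in the minimal PyTorch sketch below: one encoder takes the reprojected iToF depth concatenated with a monocular depth prior, the other takes the RGB image, and a shared decoder produces the refined depth. All layer choices, channel counts, and names here are assumptions made for illustration; the paper's actual network is more elaborate and additionally enforces depth consistency constraints.

```python
import torch
import torch.nn as nn

class DualEncoderFusion(nn.Module):
    """Toy dual-encoder fusion: depth+prior branch, RGB branch, joint decoder."""

    def __init__(self, feat=32):
        super().__init__()
        def enc(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
        self.depth_enc = enc(2)   # reprojected iToF depth + monocular prior
        self.rgb_enc = enc(3)     # wide-FoV RGB image
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(feat, 1, 3, padding=1),   # refined depth map
        )

    def forward(self, itof_depth, mono_prior, rgb):
        fd = self.depth_enc(torch.cat([itof_depth, mono_prior], dim=1))
        fr = self.rgb_enc(rgb)
        return self.decoder(torch.cat([fd, fr], dim=1))

# Example: 256x256 inputs yield a refined full-resolution depth map.
net = DualEncoderFusion()
d = torch.rand(1, 1, 256, 256)    # sparse reprojected iToF depth
m = torch.rand(1, 1, 256, 256)    # monocular depth prior
rgb = torch.rand(1, 3, 256, 256)  # RGB image
refined = net(d, m, rgb)          # -> shape (1, 1, 256, 256)
```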
Similar Papers
Self-Supervised Enhancement for Depth from a Lightweight ToF Sensor with Monocular Images
CV and Pattern Recognition
Makes blurry 3D pictures sharp and clear.
RGB-Thermal Infrared Fusion for Robust Depth Estimation in Complex Environments
Image and Video Processing
Helps cars see in dark or bad weather.
DEPTHOR: Depth Enhancement from a Practical Light-Weight dToF Sensor and RGB Image
CV and Pattern Recognition
Makes 3D pictures from blurry light sensors.