URNet: Uncertainty-aware Refinement Network for Event-based Stereo Depth Estimation
By: Yifeng Cheng, Alois Knoll, Hu Cao
Potential Business Impact:
Helps cameras see depth better in tricky light.
Event cameras provide high temporal resolution, high dynamic range, and low latency, offering significant advantages over conventional frame-based cameras. In this work, we introduce an uncertainty-aware refinement network called URNet for event-based stereo depth estimation. Our approach features a local-global refinement module that effectively captures fine-grained local details and long-range global context. Additionally, we introduce a Kullback-Leibler (KL) divergence-based uncertainty modeling method to enhance prediction reliability. Extensive experiments on the DSEC dataset demonstrate that URNet consistently outperforms state-of-the-art (SOTA) methods in both qualitative and quantitative evaluations.
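The abstract does not detail the KL divergence-based uncertainty modeling, so as a rough illustration of the general idea, here is a minimal sketch of one plausible formulation: the network predicts a per-pixel Gaussian over disparity (mean and log-variance), uncertainty is scored as the KL divergence to a reference Gaussian around the target disparity, and that score is turned into a confidence weight for refinement. All function names, the unit reference variance, and the exponential weighting are assumptions, not the paper's actual method.

```python
import math

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL(N(mu_p, var_p) || N(mu_q, var_q)) for 1-D Gaussians.

    Zero when the two distributions coincide; grows with the mean
    offset and with the mismatch in variances.
    """
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q
                  - 1.0)

def refine_weight(kl_score, temperature=1.0):
    """Map a KL-based uncertainty score to a confidence weight in (0, 1].

    Confident (low-KL) pixels get weights near 1; uncertain pixels are
    down-weighted during refinement. The exponential form is one common
    choice, assumed here for illustration.
    """
    return math.exp(-kl_score / temperature)

# Hypothetical per-pixel predictions: (predicted disparity, log-variance)
predictions = [(12.1, -1.0), (30.0, 2.0)]
target_disparities = [12.0, 12.0]  # reference disparities (e.g. pseudo-labels)

for (mu, logvar), target in zip(predictions, target_disparities):
    kl = gaussian_kl(mu, math.exp(logvar), target, 1.0)
    w = refine_weight(kl)
    print(f"disp={mu:5.1f}  KL={kl:7.3f}  weight={w:.3f}")
```

Under this sketch, the second pixel (far from the reference and with high predicted variance) receives a much larger KL score and hence a near-zero refinement weight, which is the qualitative behavior an uncertainty-aware refinement stage relies on.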
Similar Papers
DERD-Net: Learning Depth from Event-based Ray Densities
CV and Pattern Recognition
Helps cameras see depth in any light.
UM-Depth: Uncertainty Masked Self-Supervised Monocular Depth Estimation with Visual Odometry
CV and Pattern Recognition
Makes self-driving cars see better in tricky spots.
A Survey of 3D Reconstruction with Event Cameras
CV and Pattern Recognition
Helps robots see in fast, dark, or bright places.