BoRe-Depth: Self-supervised Monocular Depth Estimation with Boundary Refinement for Embedded Systems
By: Chang Liu, Juan Li, Sheng Zhang, and more
Potential Business Impact:
Helps robots see in 3D with clear edges.
Depth estimation is one of the key technologies for realizing 3D perception in unmanned systems. Monocular depth estimation has been widely researched because of its low cost, but existing methods suffer from poor depth estimation performance and blurred object boundaries on embedded systems. In this paper, we propose a novel monocular depth estimation model, BoRe-Depth, which contains only 8.7M parameters. It accurately estimates depth maps on embedded systems and significantly improves boundary quality. Firstly, we design an Enhanced Feature Adaptive Fusion Module (EFAF) that adaptively fuses depth features to enhance boundary detail representation. Secondly, we integrate semantic knowledge into the encoder to improve object recognition and boundary perception. Finally, BoRe-Depth is deployed on an NVIDIA Jetson Orin, where it runs efficiently at 50.7 FPS. We demonstrate that the proposed model significantly outperforms previous lightweight models on multiple challenging datasets, and we provide detailed ablation studies of the proposed methods. The code is available at https://github.com/liangxiansheng093/BoRe-Depth.
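To give a rough sense of what "adaptively fusing depth features" means, here is a minimal NumPy sketch of per-pixel weighted fusion of two feature maps. This is a generic illustration of adaptive fusion, not the paper's actual EFAF module; the function names (`adaptive_fuse`, `gate_logits`) and the two-branch setup are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_fuse(shallow, deep, gate_logits):
    """Fuse two feature maps with per-pixel weights that sum to 1.

    gate_logits has shape (2, H, W): one logit map per branch.
    In a real network these logits would be predicted by a small
    learned gating layer; here they are supplied directly.
    """
    w = softmax(gate_logits, axis=0)          # (2, H, W), w.sum(0) == 1
    return w[0] * shallow + w[1] * deep       # (H, W)

# Toy 4x4 feature maps: equal logits give equal 0.5/0.5 weights,
# so the fused map is the average of the two branches.
shallow = np.ones((4, 4))
deep = np.zeros((4, 4))
gate = np.zeros((2, 4, 4))
fused = adaptive_fuse(shallow, deep, gate)
print(fused[0, 0])  # 0.5
```

Boundary-aware variants of this idea typically let the gate favor the shallow (high-resolution) branch near edges, which is the intuition behind sharpening object boundaries during fusion.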
Similar Papers
BokehDepth: Enhancing Monocular Depth Estimation through Bokeh Generation
CV and Pattern Recognition
Makes blurry photos show depth better.
UM-Depth: Uncertainty Masked Self-Supervised Monocular Depth Estimation with Visual Odometry
CV and Pattern Recognition
Makes self-driving cars see better in tricky spots.
RTS-Mono: A Real-Time Self-Supervised Monocular Depth Estimation Method for Real-World Deployment
CV and Pattern Recognition
Helps cars see how far things are, fast.