EndoMUST: Monocular Depth Estimation for Robotic Endoscopy via End-to-end Multi-step Self-supervised Training
By: Liangjing Shao, Linxin Bai, Chenkang Du, and more
Potential Business Impact:
Helps tiny cameras see inside bodies better.
Monocular depth estimation and ego-motion estimation are significant tasks for scene perception and navigation in stable, accurate and efficient robot-assisted endoscopy. To tackle lighting variations and sparse textures in endoscopic scenes, multiple techniques including optical flow, appearance flow and intrinsic image decomposition have been introduced into existing methods. However, an effective training strategy for multiple modules remains critical for handling both illumination issues and information interference in self-supervised depth estimation for endoscopy. Therefore, a novel framework with multi-step efficient finetuning is proposed in this work. Each epoch of end-to-end training is divided into three steps: optical flow registration, multiscale image decomposition and multiple transformation alignments. At each step, only the related networks are trained, avoiding interference from irrelevant information. Based on parameter-efficient finetuning of a foundation model, the proposed method achieves state-of-the-art performance on self-supervised depth estimation on the SCARED dataset and zero-shot depth estimation on the Hamlyn dataset, with 4%–10% lower error. The evaluation code of this work has been published at https://github.com/BaymaxShao/EndoMUST.
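The three-step schedule described above can be sketched as a per-epoch loop in which each step updates only its related networks while the rest stay frozen. This is a minimal illustrative sketch, assuming a set of module names (`flow_net`, `decomposition_net`, `depth_net`, etc.) that are hypothetical stand-ins, not the authors' actual identifiers:

```python
# Sketch of the multi-step training schedule from the abstract: each
# epoch runs three steps, and at each step only the modules relevant to
# that step are trainable, so gradients from irrelevant objectives
# cannot interfere. Names are illustrative assumptions.

STEPS = [
    # (step name, modules trained during that step)
    ("optical_flow_registration", {"flow_net"}),
    ("multiscale_image_decomposition", {"appearance_net", "decomposition_net"}),
    ("multiple_transformation_alignments", {"depth_net", "pose_net"}),
]

ALL_MODULES = {"flow_net", "appearance_net",
               "decomposition_net", "depth_net", "pose_net"}

def run_epoch():
    """Simulate one epoch; return (step, trained, frozen) per step."""
    schedule = []
    for step_name, active in STEPS:
        # Freeze every module unrelated to this step.
        frozen = ALL_MODULES - active
        schedule.append((step_name, sorted(active), sorted(frozen)))
    return schedule

for step, trained, frozen in run_epoch():
    print(f"{step}: trains {trained}, freezes {frozen}")
```

In a real PyTorch implementation, freezing would amount to toggling `requires_grad` on each module's parameters (or passing only the active modules' parameters to the optimizer) before computing that step's loss.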
Similar Papers
EndoGeDE: Generalizable Monocular Depth Estimation with Mixture of Low-Rank Experts for Diverse Endoscopic Scenes
CV and Pattern Recognition
Helps doctors see inside bodies better.
Occlusion-Aware Self-Supervised Monocular Depth Estimation for Weak-Texture Endoscopic Images
CV and Pattern Recognition
Helps doctors see inside bodies better.
EndoUFM: Utilizing Foundation Models for Monocular depth estimation of endoscopic images
CV and Pattern Recognition
Helps doctors see inside bodies better.