From Monocular Vision to Autonomous Action: Guiding Tumor Resection via 3D Reconstruction
By: Ayberk Acar, Mariana Smith, Lidia Al-Zogbi, and more
Potential Business Impact:
Helps robots see inside bodies for surgery.
Surgical automation requires precise guidance and understanding of the scene. Current methods in the literature rely on bulky depth cameras to create maps of the anatomy; however, these do not translate well to space-limited clinical applications. Monocular cameras are small and enable minimally invasive surgery in tight spaces, but additional processing is required to generate 3D scene understanding. We propose a 3D mapping pipeline that uses only RGB images to create segmented point clouds of the target anatomy. To ensure the most precise reconstruction, we compare the performance of different structure-from-motion algorithms on mapping central airway obstructions, and we test the pipeline on a downstream tumor-resection task. In several metrics, including post-procedure tissue model evaluation, our pipeline performs comparably to RGB-D cameras and, in some cases, even surpasses them. These promising results demonstrate that automation guidance can be achieved in minimally invasive procedures with monocular cameras. This study is a step toward the complete autonomy of surgical robots.
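To make the core idea concrete, below is a minimal two-view structure-from-motion sketch in Python with OpenCV: it recovers relative camera motion from matched features in two monocular RGB frames and triangulates a sparse 3D point cloud. This is an illustrative assumption of how such a reconstruction step might look, not the authors' pipeline, which compares full structure-from-motion algorithms and adds anatomy segmentation; the function name two_view_point_cloud and the calibrated intrinsic matrix K are hypothetical inputs.

```python
# Minimal two-view SfM sketch (assumed setup, not the paper's implementation).
import cv2
import numpy as np

def two_view_point_cloud(img1_path, img2_path, K):
    """Triangulate a sparse 3D point cloud from two monocular RGB frames.

    K is the 3x3 camera intrinsic matrix, assumed known from calibration.
    Returns an N x 3 array of points, reconstructed up to scale.
    """
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # Detect and match ORB features between the two frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate relative camera motion from the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask_pose = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate inlier correspondences into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask_pose.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (pts4d[:3] / pts4d[3]).T
```

In a full pipeline of the kind described in the abstract, many such frames would be registered jointly, the resulting point cloud would be segmented to isolate the target anatomy, and the segmented map would then guide the downstream resection task.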
Similar Papers
Mono3R: Exploiting Monocular Cues for Geometric 3D Reconstruction
CV and Pattern Recognition
Makes 3D pictures from photos better.
3D Mapping Using a Lightweight and Low-Power Monocular Camera Embedded inside a Gripper of Limbed Climbing Robots
Robotics
Lets robots climb walls using just one camera.
TumorMap: A Laser-based Surgical Platform for 3D Tumor Mapping and Fully-Automated Tumor Resection
Robotics
Robot cuts out tumors precisely with lasers.