Estimation of Tissue Deformation and Interactive Force in Robotic Surgery through Vision-based Learning
By: Srikar Annamraju, Yuxi Chen, Jooyoung Lim, and more
Potential Business Impact:
Gives surgeons a sense of touch and a clearer view of tissue during robotic surgery.
Goal: A key limitation in robotic surgery is the lack of force feedback, owing to the difficulty of integrating suitable force-sensing techniques. To enhance surgeons' perception and enable precise force rendering, this work presents the estimation of these interaction forces together with the tissue deformation level. Methods: An experimental test bed is built for studying the interaction, and the forces are estimated from the raw data. Since tissue deformation and stiffness are non-linearly related, they are computed independently for greater reliability. A Convolutional Neural Network (CNN) based vision model is deployed, and both classification and regression models are developed. Results: The forces applied to the tissue are estimated, and the tissue is classified by its deformation level. The exact deformation of the tissue is also computed. Conclusions: The proposed method lets surgeons render precise forces and detect tumors. The rarely discussed value of computing the deformation level is also demonstrated.
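The abstract describes a single vision model that serves both tasks: classifying tissue by its deformation level and regressing continuous quantities (applied force and exact deformation). Below is a minimal sketch of how such a dual-head CNN could be structured in PyTorch; the backbone depth, layer widths, input resolution, and head dimensions are illustrative assumptions, since the paper's exact architecture is not given here.

```python
import torch
import torch.nn as nn

class ForceDeformationCNN(nn.Module):
    """Illustrative dual-head CNN: a shared convolutional backbone feeds
    (1) a classifier for the tissue deformation class and
    (2) a regressor for applied force and exact deformation.
    Layer sizes are assumptions, not the paper's reported design."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),  # 32 channels * 4 * 4 = 512 features
        )
        # Head 1: classify the tissue by deformation level (discrete classes).
        self.classifier = nn.Linear(512, num_classes)
        # Head 2: regress force magnitude and exact deformation (2 scalars).
        self.regressor = nn.Linear(512, 2)

    def forward(self, x: torch.Tensor):
        features = self.backbone(x)
        return self.classifier(features), self.regressor(features)

# Usage: one RGB frame of the tool-tissue interaction (assumed 224x224 input).
model = ForceDeformationCNN()
class_logits, force_and_deformation = model(torch.randn(1, 3, 224, 224))
```

Computing the two outputs independently from a shared backbone mirrors the paper's point that deformation and stiffness are non-linearly related, so neither quantity is derived from the other.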
Similar Papers
Learning User Interaction Forces using Vision for a Soft Finger Exosuit
Robotics
Lets robots feel where they touch skin.
Tracking-Aware Deformation Field Estimation for Non-rigid 3D Reconstruction in Robotic Surgeries
CV and Pattern Recognition
Robots see and measure tissue changes during surgery.
Toward Reliable AR-Guided Surgical Navigation: Interactive Deformation Modeling with Data-Driven Biomechanics and Prompts
CV and Pattern Recognition
Helps surgeons see inside bodies better.