Multi-Modal Gesture Recognition from Video and Surgical Tool Pose Information via Motion Invariants
By: Jumanh Atoum, Garrison L. H. Johnston, Nabil Simaan, and more
Potential Business Impact:
Helps robots learn to perform surgery better.
Recognizing surgical gestures in real time is a stepping stone toward automated activity recognition, skill assessment, intra-operative assistance, and eventually surgical automation. Current robotic surgical systems provide rich multi-modal data, such as video and kinematics. While some recent multi-modal neural networks learn relationships between vision and kinematics data, current approaches treat kinematics information as independent signals, with no underlying relation between tool-tip poses. However, instrument poses are geometrically related, and this underlying geometry can help neural networks learn gesture representations. We therefore propose combining motion-invariant measures (curvature and torsion) with vision and kinematics data using a relational graph network that captures the underlying relations between the different data streams. We show that gesture recognition improves when invariant signals are combined with tool position, achieving 90.3% frame-wise accuracy on the JIGSAWS suturing dataset. Our results show that motion-invariant signals coupled with position represent gesture motion better than traditional position and quaternion representations, highlighting the need for geometry-aware modeling of kinematics in gesture recognition.
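The motion invariants named in the abstract, curvature and torsion, are standard differential-geometric quantities of a 3D trajectory, so they can be computed directly from the tool-tip position stream. Below is a minimal sketch (not the authors' implementation) of one way to do this with finite differences via the Frenet-Serret formulas; the `positions` array, the `motion_invariants` function name, and the sampling-period argument `dt` are illustrative assumptions, standing in for a Cartesian tool-tip trajectory such as the JIGSAWS kinematics stream.

```python
# Sketch: extracting motion invariants (curvature, torsion) from a
# tool-tip position trajectory. Assumes `positions` is an (N, 3) array
# of Cartesian samples and `dt` is the sampling period (hypothetical
# names; the paper does not specify its numerical scheme).
import numpy as np

def motion_invariants(positions: np.ndarray, dt: float = 1.0):
    """Per-frame curvature and torsion of a 3D trajectory.

    Frenet-Serret formulas on finite-difference derivatives:
        kappa = |r' x r''| / |r'|^3
        tau   = (r' x r'') . r''' / |r' x r''|^2
    """
    r1 = np.gradient(positions, dt, axis=0)   # velocity r'
    r2 = np.gradient(r1, dt, axis=0)          # acceleration r''
    r3 = np.gradient(r2, dt, axis=0)          # jerk r'''

    cross = np.cross(r1, r2)                  # r' x r''
    cross_norm = np.linalg.norm(cross, axis=1)
    speed = np.linalg.norm(r1, axis=1)

    eps = 1e-12                               # guard against near-zero speeds
    curvature = cross_norm / np.maximum(speed**3, eps)
    torsion = np.einsum('ij,ij->i', cross, r3) / np.maximum(cross_norm**2, eps)
    return curvature, torsion

# Example: a helix, whose true curvature and torsion are constant.
t = np.linspace(0, 4 * np.pi, 500)
helix = np.stack([np.cos(t), np.sin(t), 0.5 * t], axis=1)
kappa, tau = motion_invariants(helix, dt=t[1] - t[0])
```

For the helix above, the estimates should sit near the analytic values of curvature 0.8 and torsion 0.4 away from the boundary frames, which serves as a quick sanity check. Because both quantities are invariant to rigid transformations of the workspace, they describe the shape of the motion itself rather than where it happens, which is the property the paper exploits for gesture representation.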
Similar Papers
Efficient Surgical Robotic Instrument Pose Reconstruction in Real World Conditions Using Unified Feature Detection
Robotics
Helps robot arms see and move precisely.
Gaze-Guided 3D Hand Motion Prediction for Detecting Intent in Egocentric Grasping Tasks
CV and Pattern Recognition
Helps robots guess how your hand will move next.
Differentiable Rendering-based Pose Estimation for Surgical Robotic Instruments
Robotics
Helps robot surgery tools know their exact position.