Evaluating Large Vision-Language Models for Surgical Tool Detection
By: Nakul Poudel, Richard Simon, Cristian A. Linte
Potential Business Impact:
AI helps surgeons find tools during operations.
Surgery is a highly complex process, and artificial intelligence has emerged as a transformative force in supporting surgical guidance and decision-making. However, the unimodal nature of most current AI systems limits their ability to achieve a holistic understanding of surgical workflows. This highlights the need for general-purpose surgical AI systems capable of comprehensively modeling the interrelated components of surgical scenes. Recent advances in large vision-language models (VLMs) that integrate multimodal data processing offer strong potential for modeling surgical tasks and providing human-like scene reasoning and understanding. Despite their promise, systematic investigations of VLMs in surgical applications remain limited. In this study, we evaluate the effectiveness of large VLMs on the fundamental surgical vision task of detecting surgical tools. Specifically, we investigate three state-of-the-art VLMs, Qwen2.5, LLaVA1.5, and InternVL3.5, on the GraSP robotic surgery dataset under both zero-shot and parameter-efficient LoRA fine-tuning settings. Our results demonstrate that among the evaluated VLMs, Qwen2.5 consistently achieves superior detection performance in both configurations. Furthermore, compared with the open-set detection baseline Grounding DINO, Qwen2.5 exhibits stronger zero-shot generalization and comparable fine-tuned performance. Notably, Qwen2.5 shows superior instrument recognition, while Grounding DINO demonstrates stronger localization.
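The abstract distinguishes instrument recognition (is the predicted tool class correct?) from localization (does the predicted box overlap the ground truth tightly enough?). Detection evaluations of this kind typically score localization with intersection-over-union (IoU) and count a prediction as a true positive only when the class matches and the IoU exceeds a threshold. A minimal sketch of that scoring logic follows; the function names and the 0.5 threshold are illustrative conventions, not details taken from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def score_detection(pred, gt, iou_thresh=0.5):
    """Separate recognition from localization for one prediction/ground-truth pair.

    pred and gt are (class_label, box) tuples. Returns a dict so the two
    failure modes the paper contrasts can be counted independently.
    """
    pred_cls, pred_box = pred
    gt_cls, gt_box = gt
    overlap = iou(pred_box, gt_box)
    return {
        "recognized": pred_cls == gt_cls,       # class correct
        "localized": overlap >= iou_thresh,     # box overlap sufficient
        "true_positive": pred_cls == gt_cls and overlap >= iou_thresh,
    }
```

Under this decomposition, a model can recognize the instrument yet miss on localization (loose box), or vice versa, which is exactly the trade-off reported between Qwen2.5 and Grounding DINO.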
Similar Papers
Systematic Evaluation of Large Vision-Language Models for Surgical Artificial Intelligence
CV and Pattern Recognition
AI helps doctors understand surgery better.
SurgVLM: A Large Vision-Language Model and Systematic Evaluation Benchmark for Surgical Intelligence
CV and Pattern Recognition
Helps surgeons by understanding surgery videos.
SurgXBench: Explainable Vision-Language Model Benchmark for Surgery
CV and Pattern Recognition
Helps robot surgeons see and understand actions.