Estimating 2D Keypoints of Surgical Tools Using Vision-Language Models with Low-Rank Adaptation

Published: August 28, 2025 | arXiv ID: 2508.20830v1

By: Krit Duangprom, Tryphon Lambrou, Binod Bhattarai

Potential Business Impact:

Helps surgical robots see and precisely locate small tools.

Business Areas:
Image Recognition, Data and Analytics, Software

This paper presents a novel pipeline for 2D keypoint estimation of surgical tools by leveraging Vision-Language Models (VLMs) fine-tuned with the low-rank adaptation (LoRA) technique. Unlike traditional Convolutional Neural Network (CNN) or Transformer-based approaches, which often suffer from overfitting on small-scale medical datasets, our method harnesses the generalization capabilities of pre-trained VLMs. We carefully design prompts to create an instruction-tuning dataset and use them to align visual features with semantic keypoint descriptions. Experimental results show that with only two epochs of fine-tuning, the adapted VLM outperforms the baseline models, demonstrating the effectiveness of LoRA in low-resource scenarios. This approach not only improves keypoint detection performance but also paves the way for future work on 3D pose estimation of surgical hands and tools.
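To make the LoRA idea concrete: instead of updating a full pre-trained weight matrix, LoRA freezes it and learns a low-rank correction, which is what keeps fine-tuning cheap on small medical datasets. The sketch below is a minimal NumPy illustration of this general technique, not the paper's implementation; the dimensions, rank, and scaling factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4  # hypothetical layer sizes; r is the LoRA rank

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x, alpha=8.0):
    """Frozen path plus scaled low-rank adapter path: W x + (alpha/r) B A x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter contributes nothing at the start,
# so the adapted layer exactly reproduces the pre-trained one.
assert np.allclose(lora_forward(x), W @ x)

# Only r * (d_in + d_out) adapter parameters are trained,
# versus d_out * d_in for full fine-tuning of this layer.
print(r * (d_in + d_out), "trainable vs", d_out * d_in, "full")
```

For these toy sizes the adapter trains 384 parameters against 2,048 for full fine-tuning; at transformer scale the same ratio is what makes two-epoch adaptation of a large VLM feasible.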

Country of Origin
🇬🇧 United Kingdom

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition