LLM-based ambiguity detection in natural language instructions for collaborative surgical robots
By: Ana Davila, Jacinto Colan, Yasuhisa Hasegawa
Potential Business Impact:
Helps robots understand surgery instructions better.
Ambiguity in natural language instructions poses significant risks in safety-critical human-robot interaction, particularly in domains such as surgery. To address this, we propose a framework that uses Large Language Models (LLMs) for ambiguity detection, designed specifically for collaborative surgical scenarios. Our method employs an ensemble of LLM evaluators, each configured with a distinct prompting technique to identify linguistic, contextual, procedural, and critical ambiguities. A chain-of-thought evaluator is included to systematically analyze instruction structure for potential issues. The individual evaluators' assessments are synthesized through conformal prediction, which yields non-conformity scores by comparison against a labeled calibration dataset. In evaluations with Llama 3.2 11B and Gemma 3 12B, classification accuracy exceeded 60% in differentiating ambiguous from unambiguous surgical instructions. Our approach improves the safety and reliability of human-robot collaboration in surgery by offering a mechanism to flag potentially ambiguous instructions before the robot acts.
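The abstract's aggregation step can be illustrated with a minimal split-conformal sketch. This is not the authors' implementation: the mean pooling of evaluator scores, the choice of the aggregated ambiguity score as the non-conformity measure, the calibration values, and the threshold `alpha` are all assumptions made for illustration.

```python
def aggregate(scores):
    """Pool per-evaluator ambiguity scores in [0, 1] (assumption: mean pooling)."""
    return sum(scores) / len(scores)

def conformal_p_value(test_score, calib_scores):
    """Split-conformal p-value: fraction of calibration non-conformity
    scores at least as large as the test score, with +1 smoothing."""
    ge = sum(1 for s in calib_scores if s >= test_score)
    return (ge + 1) / (len(calib_scores) + 1)

def flag_ambiguous(evaluator_scores, calib_scores, alpha=0.1):
    """Flag an instruction as ambiguous when its conformal p-value
    (relative to instructions labeled unambiguous) falls below alpha."""
    p = conformal_p_value(aggregate(evaluator_scores), calib_scores)
    return p < alpha

# Hypothetical calibration set: aggregated scores for instructions
# labeled unambiguous in the calibration data.
calib = [0.05, 0.10, 0.12, 0.08, 0.20, 0.15, 0.09, 0.11, 0.07, 0.18]

# Four hypothetical evaluators all rate a new instruction as highly ambiguous.
print(flag_ambiguous([0.9, 0.8, 0.95, 0.85], calib))  # True under these scores
```

Under this construction, the p-value bounds the chance of falsely flagging an unambiguous instruction, which is what makes conformal prediction attractive for synthesizing heterogeneous evaluator outputs in a safety-critical setting.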
Similar Papers
Enhancing LLM Instruction Following: An Evaluation-Driven Multi-Agentic Workflow for Prompt Instructions Optimization
Artificial Intelligence
Makes AI follow rules better for correct answers.
Beyond Single Models: Enhancing LLM Detection of Ambiguity in Requests through Debate
Computation and Language
Makes AI understand confusing requests better.
Affordance-Based Disambiguation of Surgical Instructions for Collaborative Robot-Assisted Surgery
Robotics
Robot surgeon understands doctor's spoken commands better.