Open-Vocabulary 3D Instruction Ambiguity Detection
By: Jiayu Ding, Haoran Tang, Ge Li
Potential Business Impact:
Helps robots understand unclear instructions in 3D.
In safety-critical domains, linguistic ambiguity can have severe consequences; a vague command like "Pass me the vial" in a surgical setting could lead to catastrophic errors. Yet most embodied AI research overlooks this, assuming instructions are clear and focusing on execution rather than confirmation. To address this critical safety gap, we are the first to define Open-Vocabulary 3D Instruction Ambiguity Detection, a fundamental new task in which a model must determine whether a command has a single, unambiguous meaning within a given 3D scene. To support this research, we build Ambi3D, the first large-scale benchmark for this task, featuring over 700 diverse 3D scenes and around 22k instructions. Our analysis reveals a surprising limitation: state-of-the-art 3D Large Language Models (LLMs) struggle to reliably determine whether an instruction is ambiguous. To meet this challenge, we propose AmbiVer, a two-stage framework that collects explicit visual evidence from multiple views and uses it to guide a vision-language model (VLM) in judging instruction ambiguity. Extensive experiments demonstrate the difficulty of the task and the effectiveness of AmbiVer, paving the way for safer and more trustworthy embodied AI. Code and dataset are available at https://jiayuding031020.github.io/ambi3d/.
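The abstract describes AmbiVer only at a high level. As a rough, hypothetical illustration of the two-stage evidence-then-judgment idea (all names and data formats below are invented for this sketch, not the paper's implementation; the detector and the final judgment are trivial stand-ins for an open-vocabulary detector and a VLM call):

```python
"""Toy sketch of a two-stage ambiguity check in the spirit of AmbiVer.

Everything here is illustrative: the view format, detect_referents, and the
counting-based judgment are hypothetical stand-ins, not the paper's code.
"""

from dataclasses import dataclass


@dataclass
class Evidence:
    view_id: int
    candidates: list[str]  # object labels in this view that match the instruction


def detect_referents(view: list[str], instruction: str) -> list[str]:
    # Stand-in for an open-vocabulary detector: keep objects named in the command.
    return [obj for obj in view if obj in instruction.lower()]


def collect_evidence(views: list[list[str]], instruction: str) -> list[Evidence]:
    # Stage 1: gather explicit visual evidence from every rendered view.
    evidence = []
    for i, view in enumerate(views):
        hits = detect_referents(view, instruction)
        if hits:
            evidence.append(Evidence(view_id=i, candidates=hits))
    return evidence


def is_ambiguous(views: list[list[str]], instruction: str) -> bool:
    # Stage 2: a real system would prompt a VLM with the collected evidence;
    # here we approximate the judgment by counting candidate referents.
    evidence = collect_evidence(views, instruction)
    total = sum(len(e.candidates) for e in evidence)
    return total != 1  # ambiguous if zero or multiple plausible referents


if __name__ == "__main__":
    # One view with two vials, one with none: the referent is not unique.
    scene = [["vial", "vial", "tray"], ["scalpel"]]
    print(is_ambiguous(scene, "Pass me the vial"))  # True
```

The split mirrors the framework's structure: stage 1 grounds the instruction in per-view evidence, and stage 2 reasons over that evidence to decide whether exactly one referent survives.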
Similar Papers
LLM-based ambiguity detection in natural language instructions for collaborative surgical robots
Robotics
Helps robots understand surgery instructions better.
Teaching Vision-Language Models to Ask: Resolving Ambiguity in Visual Questions
CV and Pattern Recognition
Helps computers ask for help when confused.
Affordance-Based Disambiguation of Surgical Instructions for Collaborative Robot-Assisted Surgery
Robotics
Robot helps surgeons by understanding their spoken words.