Teaching Vision-Language Models to Ask: Resolving Ambiguity in Visual Questions

Published: July 18, 2025 | arXiv ID: 2507.13773v2

By: Pu Jian, Donglei Yu, Wen Yang, and more

Potential Business Impact:

Enables vision-language models to ask clarifying questions when a user's query is ambiguous, rather than guessing an answer.

Business Areas:
Semantic Search, Internet Services

In visual question answering (VQA), users often pose ambiguous questions to vision-language models (VLMs) due to varying expression habits. Existing research addresses such ambiguity primarily by rephrasing the question. These approaches neglect the inherently interactive nature of user interactions with VLMs, where ambiguity can be resolved through user feedback. However, research on interactive clarification faces two major challenges: (1) no benchmark exists to assess VLMs' capacity for resolving ambiguity through interaction; (2) VLMs are trained to prefer answering over asking, which prevents them from seeking clarification. To overcome these challenges, we introduce the ClearVQA benchmark, which targets three common categories of ambiguity in the VQA context and encompasses various VQA scenarios.
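The clarify-then-answer interaction the abstract describes can be pictured as a simple loop: the model either answers or asks a question back, and user feedback is folded into the query before retrying. The sketch below is a hypothetical illustration, not code from the paper; `ask_vlm` and `get_user_feedback` are assumed stand-ins for any VLM API and user channel.

```python
# Hypothetical sketch of an interactive clarification loop (not from the paper).
# ask_vlm(image, question) is assumed to return a dict with keys
# "type" ("answer" or "clarification_request") and "text".

def answer_with_clarification(image, question, ask_vlm, get_user_feedback,
                              max_turns=2):
    """Answer a VQA query, asking the user to clarify if the model
    flags the question as ambiguous instead of answering directly."""
    for _ in range(max_turns):
        reply = ask_vlm(image, question)
        if reply["type"] == "clarification_request":
            # The model asked a question back; fold the user's
            # feedback into the query and retry.
            feedback = get_user_feedback(reply["text"])
            question = f"{question} (clarification: {feedback})"
        else:
            return reply["text"]
    # Fall back to a direct answer after max_turns clarification rounds.
    return ask_vlm(image, question)["text"]
```

The key design point mirrored here is that clarification is a first-class model output alongside answers, which is exactly the behavior a benchmark like ClearVQA would need to measure.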

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
CV and Pattern Recognition