Disambiguation in Conversational Question Answering in the Era of LLMs and Agents: A Survey
By: Md Mehrab Tanjim, Yeonjun In, Xiang Chen, and more
Potential Business Impact:
Helps computers understand confusing words better.
Ambiguity remains a fundamental challenge in Natural Language Processing (NLP) due to the inherent complexity and flexibility of human language. With the advent of Large Language Models (LLMs), addressing ambiguity has become even more critical given their expanded capabilities and applications. In the context of Conversational Question Answering (CQA), this paper explores the definition, forms, and implications of ambiguity for language-driven systems, particularly LLMs. We define key terms and concepts, categorize various disambiguation approaches enabled by LLMs, and provide a comparative analysis of their advantages and disadvantages. We also explore publicly available datasets for benchmarking ambiguity detection and resolution techniques and highlight their relevance for ongoing research. Finally, we identify open problems and future research directions, especially in agentic settings, proposing areas for further investigation. By offering a comprehensive review of current research on ambiguity and disambiguation with LLMs, we aim to contribute to the development of more robust and reliable LLM-based systems.
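The detect-then-clarify pattern the abstract alludes to can be sketched in a few lines. This is a hypothetical illustration, not any specific method from the survey: in a real CQA system an LLM would propose the candidate interpretations, whereas here they are passed in directly, and the names `Interpretation` and `disambiguate` are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    reading: str   # a paraphrase of one possible meaning of the question
    answer: str    # the answer under that reading

def disambiguate(question: str, interpretations: list[Interpretation]) -> str:
    """Answer directly if only one reading exists; otherwise ask a
    clarifying question that lists the competing readings."""
    if len(interpretations) == 1:
        return interpretations[0].answer
    options = "; ".join(i.reading for i in interpretations)
    return f"Your question is ambiguous. Did you mean: {options}?"

# "When did the Eagles win?" could refer to the band or the NFL team,
# so the system asks for clarification instead of guessing.
print(disambiguate(
    "When did the Eagles win?",
    [Interpretation("the band's Grammy win", "1978"),
     Interpretation("the NFL team's Super Bowl win", "2018")],
))
```

A single-interpretation input short-circuits straight to the answer, which mirrors the common design choice of only paying the cost of a clarification turn when ambiguity is actually detected.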
Similar Papers
A Survey of Large Language Model Agents for Question Answering
Computation and Language
Lets computers answer questions by thinking.
An Empirical Study of the Role of Incompleteness and Ambiguity in Interactions with Large Language Models
Computation and Language
Helps computers ask better questions to get answers.
The Illusion of Certainty: Uncertainty quantification for LLMs fails under ambiguity
Machine Learning (CS)
Makes AI understand when it's unsure.