Correct-Detect: Balancing Performance and Ambiguity Through the Lens of Coreference Resolution in LLMs

Published: September 17, 2025 | arXiv ID: 2509.14456v1

By: Amber Shore, Russell Scheinberg, Ameeta Agrawal, and more

Potential Business Impact:

Computers can't always tell who "he" or "she" refers to.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) are intended to reflect human linguistic competencies. But humans have access to a broad and embodied context, which is key in detecting and resolving linguistic ambiguities, even in isolated text spans. A foundational case of semantic ambiguity is found in the task of coreference resolution: how is a pronoun related to an earlier person mention? This capability is implicit in nearly every downstream task, and the presence of ambiguity at this level can alter performance significantly. We show that LLMs can achieve good performance with minimal prompting on both coreference disambiguation and the detection of ambiguity in coreference; however, they cannot do both at the same time. We present the CORRECT-DETECT trade-off: though models have both capabilities and deploy them implicitly, successfully balancing these two abilities remains elusive.
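To make the contrast concrete, the sketch below builds two minimal prompts over a single sentence: one that forces a coreference decision (the "correct" side of the trade-off) and one that asks whether the pronoun is resolvable at all (the "detect" side). The example sentence, the prompt wording, and the `query_llm` placeholder are hypothetical illustrations, not the paper's actual prompts or evaluation setup.

```python
# Hypothetical sketch contrasting the two capabilities discussed in the abstract:
# resolving a pronoun to an antecedent vs. detecting that the pronoun is ambiguous.

# Example passage with a potentially ambiguous pronoun (illustrative only).
passage = "The lawyer met the client before she left the office."

# Task 1: coreference disambiguation -- the prompt demands a single antecedent.
disambiguation_prompt = (
    f"Passage: {passage}\n"
    "Question: Who does 'she' refer to, the lawyer or the client? "
    "Answer with exactly one of the two options."
)

# Task 2: ambiguity detection -- the prompt asks whether a single antecedent
# can be determined from the passage alone.
detection_prompt = (
    f"Passage: {passage}\n"
    "Question: Is it clear from the passage alone who 'she' refers to? "
    "Answer 'ambiguous' or 'unambiguous'."
)


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM API; replace with a real client."""
    raise NotImplementedError("Wire this up to an actual model endpoint.")


if __name__ == "__main__":
    # The CORRECT-DETECT trade-off: prompting for a definite answer (Task 1)
    # tends to suppress the ambiguity signal that Task 2 asks for, and vice
    # versa, so performing both well in the same setting remains difficult.
    print(disambiguation_prompt)
    print()
    print(detection_prompt)
```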

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science:
Computation and Language