Improving LLMs' Learning for Coreference Resolution
By: Yujian Gan, Yuan Liang, Yanni Lin, and more
Potential Business Impact:
Helps computers understand who "he" or "she" is.
Coreference Resolution (CR) is crucial for many NLP tasks, but existing LLMs struggle with hallucination and under-performance. In this paper, we investigate the limitations of existing LLM-based approaches to CR, specifically the Question-Answering (QA) Template and Document Template methods, and propose two novel techniques: Reversed Training with Joint Inference and Iterative Document Generation. Our experiments show that Reversed Training improves the QA Template method, while Iterative Document Generation eliminates hallucinations in the generated source text and boosts coreference resolution. Integrating these methods and techniques offers an effective and robust solution to LLM-based coreference resolution.
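The abstract names the two prompting styles and the hallucination check without going into detail, so here is a minimal Python sketch of how they might look in practice. Everything concrete in it (the `call_llm` stub, the exact prompt wording, the `[mention](#id)` markup, and the verbatim-match rejection test) is an illustrative assumption, not the authors' implementation.

```python
import re

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client; returns the model's text."""
    raise NotImplementedError("plug in an LLM client here")

def qa_template(document: str, mention: str) -> str:
    """QA Template style: ask about one mention at a time."""
    prompt = (
        f"Document:\n{document}\n\n"
        f'Question: In the document above, who or what does "{mention}" refer to?\n'
        "Answer with the coreferent mention."
    )
    return call_llm(prompt)

def document_template(document: str) -> str:
    """Document Template style: regenerate the whole text with cluster markup."""
    prompt = (
        "Rewrite the document, wrapping every mention as [text](#id) so that "
        "coreferent mentions share the same id.\n\n" + document
    )
    return call_llm(prompt)

ANNOTATION = re.compile(r"\[([^\]]+)\]\(#\d+\)")

def iterative_document_generation(sentences: list[str], max_tries: int = 3) -> str:
    """One reading of Iterative Document Generation: annotate sentence by
    sentence, rejecting any output whose underlying text differs from the
    source (the hallucination check)."""
    annotated: list[str] = []
    for sentence in sentences:
        for _ in range(max_tries):
            candidate = call_llm(
                "Annotated so far:\n" + " ".join(annotated) + "\n\n"
                "Add [mention](#id) tags to the next sentence without "
                "changing any other token:\n" + sentence
            )
            # Strip the markup; the remaining text must match the source verbatim.
            if ANNOTATION.sub(r"\1", candidate).strip() == sentence.strip():
                annotated.append(candidate)
                break
    return " ".join(annotated)
```

The verbatim-match test is what makes the iterative variant resistant to hallucinated source text: any regenerated sentence that does not reduce back to the input, once the tags are stripped, is discarded and re-requested rather than kept.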
Similar Papers
CorefInst: Leveraging LLMs for Multilingual Coreference Resolution
Computation and Language
Helps computers understand who "he" or "she" is.
Findings of the Fourth Shared Task on Multilingual Coreference Resolution: Can LLMs Dethrone Traditional Approaches?
Computation and Language
Helps computers understand who or what is being talked about.
ImCoref-CeS: An Improved Lightweight Pipeline for Coreference Resolution with LLM-based Checker-Splitter Refinement
Computation and Language
Helps computers understand who "he" or "she" is.