Findings of the Fourth Shared Task on Multilingual Coreference Resolution: Can LLMs Dethrone Traditional Approaches?
By: Michal Novák, Miloslav Konopík, Anna Nedoluzhko, and more
Potential Business Impact:
Helps computers track who or what is being referred to across a text.
The paper presents an overview of the fourth edition of the Shared Task on Multilingual Coreference Resolution, organized as part of the CODI-CRAC 2025 workshop. As in previous editions, participants were challenged to develop systems that identify mentions and cluster them according to identity coreference. A key innovation of this year's task was the introduction of a dedicated Large Language Model (LLM) track, featuring a simplified plaintext format designed to be more suitable for LLMs than the original CoNLL-U representation. The task also expanded its coverage with three new datasets in two additional languages, using version 1.3 of CorefUD, a harmonized multilingual collection of 22 datasets in 17 languages. In total, nine systems participated, including four LLM-based approaches (two fine-tuned and two using few-shot adaptation). While traditional systems still held the lead, LLMs showed clear potential, suggesting they may soon challenge established approaches in future editions.
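To make the task concrete, here is a minimal sketch of the output structure coreference systems are asked to produce: mentions (text spans) grouped into clusters, where each cluster contains all mentions of one entity. The example sentence, the Mention class, and the token-index spans are illustrative assumptions, not the shared task's actual CoNLL-U or LLM-track plaintext format.

```python
# Minimal sketch of identity-coreference output: mentions grouped into
# clusters of spans that refer to the same entity. All names and spans
# here are illustrative, not the shared task's actual data format.

from dataclasses import dataclass

@dataclass(frozen=True)
class Mention:
    start: int  # token index where the mention begins (inclusive)
    end: int    # token index where the mention ends (exclusive)
    text: str   # surface form, kept only for readability

tokens = "Marie met the director . She thanked her .".split()

# Each inner list is one coreference cluster: all mentions of one entity.
clusters = [
    [Mention(0, 1, "Marie"), Mention(5, 6, "She")],
    [Mention(2, 4, "the director"), Mention(7, 8, "her")],
]

for i, cluster in enumerate(clusters):
    spans = ", ".join(f"{m.text}[{m.start}:{m.end}]" for m in cluster)
    print(f"entity {i}: {spans}")
```

A system is scored both on finding the mention spans and on grouping them correctly; the LLM track differs only in how this structure is serialized for the model, not in what must be predicted.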
Similar Papers
CorefInst: Leveraging LLMs for Multilingual Coreference Resolution
Computation and Language
Helps computers understand who "he" or "she" is.
BioCoref: Benchmarking Biomedical Coreference Resolution with LLMs
Computation and Language
Helps computers understand medical writing better.
Correct-Detect: Balancing Performance and Ambiguity Through the Lens of Coreference Resolution in LLMs
Computation and Language
Computers can't always tell who "he" or "she" is.