Findings of the Fourth Shared Task on Multilingual Coreference Resolution: Can LLMs Dethrone Traditional Approaches?

Published: September 22, 2025 | arXiv ID: 2509.17796v1

By: Michal Novák, Miloslav Konopík, Anna Nedoluzhko, and more

Potential Business Impact:

Helps computers understand who or what is being talked about.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The paper presents an overview of the fourth edition of the Shared Task on Multilingual Coreference Resolution, organized as part of the CODI-CRAC 2025 workshop. As in previous editions, participants were challenged to develop systems that identify mentions and cluster them according to identity coreference. A key innovation of this year's task was the introduction of a dedicated Large Language Model (LLM) track, featuring a simplified plaintext format designed to be more suitable for LLMs than the original CoNLL-U representation. The task also expanded its coverage with three new datasets in two additional languages, drawing on version 1.3 of CorefUD, a harmonized multilingual collection of 22 datasets in 17 languages. In total, nine systems participated, including four LLM-based approaches (two fine-tuned and two using few-shot adaptation). While traditional systems retained the lead, LLMs showed clear potential, suggesting they may challenge established approaches in future editions.
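The core task the abstract describes, identifying mention spans and grouping them into identity-coreference clusters, can be illustrated with a minimal sketch. The sentence, spans, and clusters below are a hypothetical example, not data from the shared task or the CorefUD corpora:

```python
# Minimal illustration of identity coreference (hypothetical example).
# Each mention is a (start, end) token span over a flat token list;
# a cluster groups all mentions that refer to the same entity.

tokens = ["Mary", "saw", "her", "dog", ".", "She", "smiled", "."]

# Gold clusters: "Mary", "her", and "She" corefer;
# "her dog" forms a singleton cluster.
clusters = [
    [(0, 1), (2, 3), (5, 6)],  # entity 1: Mary / her / She
    [(2, 4)],                  # entity 2: her dog (singleton)
]

def mention_text(span):
    """Return the surface string of a mention span."""
    start, end = span
    return " ".join(tokens[start:end])

for i, cluster in enumerate(clusters, 1):
    print(f"entity {i}:", [mention_text(m) for m in cluster])
```

Note that mentions may overlap (the span for "her" is nested inside "her dog"), which is one reason coreference systems must both detect spans and cluster them, rather than simply tagging tokens.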

Country of Origin
🇨🇿 Czech Republic

Repos / Data Links

Page Count
24 pages

Category
Computer Science:
Computation and Language