Multi-Lingual Implicit Discourse Relation Recognition with Multi-Label Hierarchical Learning
By: Nelson Filipe Costa, Leila Kosseim
Potential Business Impact:
Helps computers understand how sentences connect logically, even when the connection is not stated explicitly.
This paper introduces the first multi-lingual and multi-label classification model for implicit discourse relation recognition (IDRR). Our model, HArch, is evaluated on the recently released DiscoGeM 2.0 corpus and leverages hierarchical dependencies between discourse senses to predict probability distributions across all three sense levels in the PDTB 3.0 framework. We compare several pre-trained encoder backbones and find that RoBERTa-HArch achieves the best performance in English, while XLM-RoBERTa-HArch performs best in the multi-lingual setting. In addition, we compare our fine-tuned models against GPT-4o and Llama-4-Maverick using few-shot prompting across all language configurations. Our results show that our fine-tuned models consistently outperform these LLMs, highlighting the advantages of task-specific fine-tuning over prompting in IDRR. Finally, we report state-of-the-art (SOTA) results on the DiscoGeM 1.0 corpus, further validating the effectiveness of our hierarchical approach.
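To make the architecture concrete, below is a minimal PyTorch sketch of a hierarchical multi-label IDRR model in the spirit of the abstract. It assumes the hierarchical dependencies are wired by feeding each sense level's predicted distribution into the next level's classifier; the class name HArchSketch, the level-2/3 class counts, and this particular conditioning scheme are illustrative assumptions, not the authors' released implementation.

    # Hedged sketch: a shared encoder with three chained classification heads,
    # one per PDTB 3.0 sense level. HArchSketch and the n2/n3 sizes are
    # assumptions for illustration only.
    import torch
    import torch.nn as nn
    from transformers import AutoModel

    class HArchSketch(nn.Module):
        def __init__(self, backbone="roberta-base", n1=4, n2=17, n3=28):
            # n1=4 matches the PDTB 3.0 top level (Temporal, Contingency,
            # Comparison, Expansion); n2 and n3 are placeholder sizes.
            super().__init__()
            self.encoder = AutoModel.from_pretrained(backbone)
            h = self.encoder.config.hidden_size
            self.head1 = nn.Linear(h, n1)            # level-1 senses
            self.head2 = nn.Linear(h + n1, n2)       # conditioned on level 1
            self.head3 = nn.Linear(h + n1 + n2, n3)  # conditioned on levels 1-2

        def forward(self, input_ids, attention_mask):
            # Encode the argument pair and pool the first ([CLS]-style) token.
            cls = self.encoder(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state[:, 0]
            p1 = torch.softmax(self.head1(cls), dim=-1)
            p2 = torch.softmax(self.head2(torch.cat([cls, p1], dim=-1)), dim=-1)
            p3 = torch.softmax(self.head3(torch.cat([cls, p1, p2], dim=-1)), dim=-1)
            return p1, p2, p3  # one probability distribution per sense level

Since DiscoGeM provides soft, multi-annotator label distributions, a model like this would plausibly be trained with a distribution-matching loss (e.g., cross-entropy or KL divergence against the gold distribution at each level), and swapping the backbone to "xlm-roberta-base" would give the multi-lingual variant described in the abstract.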
Similar Papers
CLaC at DISRPT 2025: Hierarchical Adapters for Cross-Framework Multi-lingual Discourse Relation Classification
Computation and Language
Helps computers understand how sentences connect.
Synthetic Data Augmentation for Cross-domain Implicit Discourse Relation Recognition
Computation and Language
Helps computers better understand how sentences connect.
Probing LLMs for Multilingual Discourse Generalization Through a Unified Label Set
Computation and Language
Computers understand how sentences connect across languages.