Enhancing Coreference Resolution with Pretrained Language Models: Bridging the Gap Between Syntax and Semantics
By: Xingzu Liu, Songhang Deng, Mingbang Wang, and more
Potential Business Impact:
Helps computers understand who or what "they" refers to.
Large language models have made significant advances in many natural language processing tasks, including coreference resolution. However, traditional methods often fall short in distinguishing referential relationships because they do not integrate syntactic and semantic information. This study introduces a framework for enhancing coreference resolution with pretrained language models. Our approach combines syntactic parsing with semantic role labeling to capture finer distinctions in referential relationships. By using state-of-the-art pretrained models to obtain contextual embeddings and applying an attention mechanism during fine-tuning, we improve performance on coreference tasks. Experimental results across diverse datasets show that our method surpasses conventional coreference resolution systems, achieving notable accuracy in disambiguating references. This development not only improves coreference resolution outcomes but also benefits other natural language processing tasks that depend on precise referential understanding.
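The abstract describes fusing contextual embeddings from a pretrained model with syntactic and semantic-role features and refining them with attention before scoring coreference links. The paper's actual architecture is not given here, so the PyTorch sketch below is only one illustrative reading of that description: the class name, feature dimensions, label vocabularies, and the mean-pooled mention-pair scorer are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SyntaxSemanticCorefScorer(nn.Module):
    """Hypothetical mention-pair scorer: fuses contextual embeddings with
    dependency-label and semantic-role features, refines them with attention,
    and scores candidate coreference links. Not the paper's architecture."""

    def __init__(self, hidden_dim=768, num_dep_labels=50, num_srl_labels=30, feat_dim=64):
        super().__init__()
        # Assumed feature embeddings for syntactic (dependency) and SRL label ids.
        self.dep_emb = nn.Embedding(num_dep_labels, feat_dim)
        self.srl_emb = nn.Embedding(num_srl_labels, feat_dim)
        fused_dim = hidden_dim + 2 * feat_dim
        # Attention layer that lets tokens attend over the fused representations.
        self.attn = nn.MultiheadAttention(fused_dim, num_heads=4, batch_first=True)
        # Pairwise scorer over concatenated (antecedent, anaphor) representations.
        self.scorer = nn.Sequential(
            nn.Linear(2 * fused_dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, token_states, dep_labels, srl_labels, mention_spans):
        # token_states: (1, seq_len, hidden_dim) contextual embeddings from a pretrained encoder.
        # dep_labels, srl_labels: (1, seq_len) integer label ids per token.
        # mention_spans: list of (start, end) token indices for candidate mentions.
        feats = torch.cat(
            [token_states, self.dep_emb(dep_labels), self.srl_emb(srl_labels)], dim=-1
        )
        attended, _ = self.attn(feats, feats, feats)  # refine fused features in context
        # Mention representation: mean over the attended states of its span.
        mentions = torch.stack(
            [attended[0, s:e + 1].mean(dim=0) for s, e in mention_spans]
        )
        # Score every ordered mention pair as a potential coreference link.
        n = mentions.size(0)
        pairs = torch.cat(
            [mentions.unsqueeze(1).expand(n, n, -1),
             mentions.unsqueeze(0).expand(n, n, -1)],
            dim=-1,
        )
        return self.scorer(pairs).squeeze(-1)  # (n, n) link scores


if __name__ == "__main__":
    # Toy example: 10 tokens, two candidate mentions.
    model = SyntaxSemanticCorefScorer()
    states = torch.randn(1, 10, 768)        # would come from a pretrained LM in practice
    dep = torch.randint(0, 50, (1, 10))     # placeholder dependency-label ids
    srl = torch.randint(0, 30, (1, 10))     # placeholder semantic-role ids
    scores = model(states, dep, srl, [(0, 1), (6, 6)])
    print(scores.shape)  # torch.Size([2, 2])
```

In practice the token states would be produced by a pretrained encoder (e.g. via the `transformers` library) and the label ids by a syntactic parser and an SRL tagger; this sketch only shows how the three signals could be fused and scored.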
Similar Papers
Towards Generating Automatic Anaphora Annotations
Computation and Language
Teaches computers to understand tricky word meanings.
Disambiguating Reference in Visually Grounded Dialogues through Joint Modeling of Textual and Multimodal Semantic Structures
Computation and Language
Helps computers understand what you mean in chats.
Coreference Resolution for Vietnamese Narrative Texts
Computation and Language
Helps computers understand Vietnamese stories better.