Improving LLM's Attachment to External Knowledge In Dialogue Generation Tasks Through Entity Anonymization

Published: November 14, 2025 | arXiv ID: 2511.11946v1

By: Hadi Sheikhi, Chenyang Huang, Osmar R. Zaïane

Potential Business Impact:

Helps conversational AI systems ground their responses in supplied factual knowledge instead of falling back on memorized internal knowledge.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Knowledge graph-based dialogue generation (KG-DG) is a challenging task that requires models to effectively incorporate external knowledge into conversational responses. While large language models (LLMs) have achieved impressive results across various NLP tasks, their ability to utilize external knowledge in KG-DG remains under-explored. We observe that LLMs often rely on internal knowledge, leading to detachment from the provided knowledge graph, even when given a flawlessly retrieved one. First, we introduce LLM-KAT, an evaluation procedure for measuring knowledge attachment in generated responses. Second, we propose a simple yet effective entity anonymization technique to encourage LLMs to better leverage external knowledge. Experiments on the OpenDialKG dataset demonstrate that our approach improves LLMs' attachment to external knowledge.
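The anonymization idea lends itself to a short illustration. The sketch below is a minimal, hypothetical rendering of the technique (helper names and placeholder format are assumptions; the paper's exact procedure may differ): entities in the retrieved KG triples and dialogue history are swapped for neutral placeholder tokens before prompting, so the model cannot answer from memorized facts about the named entities, and the placeholders are mapped back after generation.

```python
# Minimal sketch of entity anonymization for KG-grounded dialogue.
# Hypothetical helpers; the paper's actual scheme may differ.

def anonymize(triples, dialogue_history):
    """Replace entity surface forms with [ENT_i] placeholder tokens."""
    mapping = {}

    def placeholder(entity):
        if entity not in mapping:
            mapping[entity] = f"[ENT_{len(mapping) + 1}]"
        return mapping[entity]

    anon_triples = [(placeholder(h), r, placeholder(t)) for h, r, t in triples]
    # Naive string replacement for brevity; a real pipeline would use
    # entity spans from the KG retriever instead.
    anon_history = dialogue_history
    for entity, token in mapping.items():
        anon_history = anon_history.replace(entity, token)
    return anon_triples, anon_history, mapping

def deanonymize(response, mapping):
    """Map placeholder tokens in the generated response back to entity names."""
    for entity, token in mapping.items():
        response = response.replace(token, entity)
    return response

# Usage: anonymize, generate with any LLM, then restore entity names.
triples = [("Inception", "directed_by", "Christopher Nolan")]
anon_triples, anon_history, mapping = anonymize(
    triples, "Who directed Inception?"
)
# Prompt an LLM with anon_triples + anon_history (model call omitted);
# suppose it answers with placeholders intact:
print(deanonymize("[ENT_1] was directed by [ENT_2].", mapping))
# -> "Inception was directed by Christopher Nolan."
```

Because the placeholders carry no pretrained associations, a response that states the correct relation can only have come from the provided triples, which is what makes attachment measurable and encourages the model to read the graph rather than recall.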

Country of Origin
🇨🇦 Canada


Page Count
12 pages

Category
Computer Science:
Computation and Language