Improving LLM's Attachment to External Knowledge In Dialogue Generation Tasks Through Entity Anonymization
By: Hadi Sheikhi, Chenyang Huang, Osmar R. Zaïane
Potential Business Impact:
Helps chatbots use the facts they are given when generating replies, instead of guessing from their own memory.
Knowledge graph-based dialogue generation (KG-DG) is a challenging task that requires models to incorporate external knowledge into conversational responses. While large language models (LLMs) have achieved impressive results across a wide range of NLP tasks, their ability to use external knowledge in KG-DG remains under-explored. We observe that LLMs often fall back on their internal knowledge and become detached from the provided knowledge graph, even when that graph is flawlessly retrieved. To address this, we first introduce LLM-KAT, an evaluation procedure for measuring knowledge attachment in generated responses. Second, we propose a simple yet effective entity anonymization technique that encourages LLMs to better leverage external knowledge. Experiments on the OpenDialKG dataset demonstrate that our approach improves LLMs' attachment to external knowledge.
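The abstract does not spell out the anonymization procedure, so the following is only a minimal sketch of how entity anonymization over knowledge-graph triples could look, assuming the graph arrives as (subject, relation, object) strings. The function names, the [ENT1]-style placeholder format, and the deanonymization step are illustrative assumptions, not the authors' implementation.

```python
# Sketch: anonymize KG entities before prompting an LLM, then map the
# placeholders in the generated response back to the original names.
# Assumptions (not from the paper): triples are (subject, relation, object)
# strings, and placeholders like [ENT1] are restored after generation.

def anonymize_triples(triples):
    """Replace entity surface forms with placeholder tokens."""
    entity_to_placeholder = {}
    anonymized = []
    for subj, rel, obj in triples:
        for entity in (subj, obj):
            if entity not in entity_to_placeholder:
                entity_to_placeholder[entity] = f"[ENT{len(entity_to_placeholder) + 1}]"
        anonymized.append(
            (entity_to_placeholder[subj], rel, entity_to_placeholder[obj])
        )
    return anonymized, entity_to_placeholder


def deanonymize(text, entity_to_placeholder):
    """Substitute placeholder tokens in the response with the real entity names."""
    for entity, placeholder in entity_to_placeholder.items():
        text = text.replace(placeholder, entity)
    return text


if __name__ == "__main__":
    kg = [("Christopher Nolan", "directed", "Inception"),
          ("Inception", "released_in", "2010")]
    anon_kg, mapping = anonymize_triples(kg)
    # anon_kg: [('[ENT1]', 'directed', '[ENT2]'), ('[ENT2]', 'released_in', '[ENT3]')]
    # Prompting with anon_kg prevents the LLM from leaning on what it already
    # "knows" about the named entities, so it must attach to the given triples.
    response = "[ENT2] was directed by [ENT1] and came out in [ENT3]."
    print(deanonymize(response, mapping))
```

The intuition behind such a design is that once familiar names are masked, the model can only produce a grounded response by copying structure from the supplied triples rather than recalling memorized facts.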