Knowledge Homophily in Large Language Models
By: Utkarsh Sahu, Zhisheng Qi, Mahantesh Halappanavar, and more
Potential Business Impact:
Helps computers learn facts faster and answer questions better.
Large Language Models (LLMs) have been increasingly studied as neural knowledge bases for supporting knowledge-intensive applications such as question answering and fact checking. However, the structural organization of their knowledge remains largely unexplored. Inspired by cognitive neuroscience findings such as semantic clustering and priming, where knowing one fact increases the likelihood of recalling related facts, we investigate an analogous knowledge homophily pattern in LLMs. To this end, we map LLM knowledge into a graph representation through knowledge checking at both the triplet and entity levels. We then analyze the relationship between the knowledgeability of an entity and that of its neighbors, finding that LLMs tend to possess a similar level of knowledge about entities positioned closer together in the graph. Motivated by this homophily principle, we propose a Graph Neural Network (GNN) regression model that estimates entity-level knowledgeability scores for unchecked triplets from the scores of their neighborhoods. The predicted knowledgeability lets us prioritize checking less well-known triplets, thereby maximizing knowledge coverage under the same labeling budget. This not only improves the efficiency of active labeling when fine-tuning LLMs to inject new knowledge, but also enhances multi-hop path retrieval in reasoning-intensive question answering.
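To make the pipeline concrete, here is a minimal sketch of the idea described in the abstract, not the authors' implementation. It assumes a list of `triplets` (head, relation, tail) and a hypothetical probe `llm_knows(h, r, t)` that checks whether the LLM answers a triplet correctly; the entity graph, the homophily measure, and the mean-aggregation GNN regressor are illustrative simplifications of whatever the paper actually uses.

```python
# Sketch (assumptions): `triplets` is a list of (head, relation, tail) strings,
# `llm_knows(h, r, t)` is any knowledge-checking routine returning True/False.
from collections import defaultdict

import torch
import torch.nn as nn


def entity_scores(triplets, llm_knows):
    """Entity-level knowledgeability = fraction of incident triplets the LLM gets right."""
    hits, totals = defaultdict(int), defaultdict(int)
    for h, r, t in triplets:
        known = llm_knows(h, r, t)
        for e in (h, t):
            totals[e] += 1
            hits[e] += int(known)
    return {e: hits[e] / totals[e] for e in totals}


def build_graph(triplets):
    """Undirected entity graph: entities are nodes, each triplet induces an edge."""
    entities = sorted({e for h, _, t in triplets for e in (h, t)})
    idx = {e: i for i, e in enumerate(entities)}
    neighbors = defaultdict(set)
    for h, _, t in triplets:
        neighbors[idx[h]].add(idx[t])
        neighbors[idx[t]].add(idx[h])
    return entities, idx, neighbors


def homophily_gap(scores, idx, neighbors):
    """Mean absolute gap between an entity's score and its neighborhood average
    (a smaller gap indicates stronger knowledge homophily)."""
    s = {idx[e]: v for e, v in scores.items()}
    gaps = [abs(s[i] - sum(s[j] for j in nbrs) / len(nbrs))
            for i, nbrs in neighbors.items() if nbrs]
    return sum(gaps) / len(gaps)


class MeanAggGNN(nn.Module):
    """Two rounds of mean-neighbor aggregation followed by a regression head."""

    def __init__(self, dim=32):
        super().__init__()
        self.lin1 = nn.Linear(1, dim)
        self.lin2 = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, x, adj):
        # `adj` is a row-normalized dense adjacency matrix, so adj @ x is a mean over neighbors.
        h = torch.relu(self.lin1(adj @ x))
        h = torch.relu(self.lin2(adj @ h))
        return self.head(h).squeeze(-1)


def train_and_rank(entities, idx, neighbors, labeled_scores, epochs=200):
    """Fit the GNN on already-checked entities, then rank unchecked entities so the
    least-known ones are checked (or labeled for fine-tuning) first."""
    n = len(entities)
    adj = torch.zeros(n, n)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            adj[i, j] = 1.0 / len(nbrs)

    x = torch.zeros(n, 1)          # known scores as input features, 0 for unchecked
    y = torch.zeros(n)
    mask = torch.zeros(n, dtype=torch.bool)
    for e, s in labeled_scores.items():
        x[idx[e], 0] = s
        y[idx[e]] = s
        mask[idx[e]] = True

    model = MeanAggGNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x, adj)[mask], y[mask])
        loss.backward()
        opt.step()

    pred = model(x, adj).detach()
    unlabeled = [i for i in range(n) if not mask[i]]
    # Lowest predicted knowledgeability first: spend the labeling budget where
    # the model is least likely to already know the facts.
    return sorted(unlabeled, key=lambda i: pred[i].item())
```

Under these assumptions, the same ranking step is what turns the homophily observation into a budget saver: if neighbors of well-known entities are themselves likely known, checking them adds little coverage, so the budget is steered toward low-scoring neighborhoods instead.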
Similar Papers
Enhancing Large Language Models with Reliable Knowledge Graphs
Computation and Language
Makes AI smarter and more truthful.
Knowledge Graphs for Enhancing Large Language Models in Entity Disambiguation
Machine Learning (CS)
Helps computers understand facts better, avoiding mistakes.
LLM-empowered knowledge graph construction: A survey
Artificial Intelligence
Helps computers understand and organize information better.