Where Knowledge Collides: A Mechanistic Study of Intra-Memory Knowledge Conflict in Language Models
By: Minh Vu Pham, Hsuvas Borkakoty, Yufang Hou
In language models (LMs), intra-memory knowledge conflict largely arises when inconsistent information about the same event is encoded within the model's parametric knowledge. While prior work has primarily focused on resolving conflicts between a model's internal knowledge and external resources through approaches such as fine-tuning or knowledge editing, the problem of localizing, within the model's internal representations, conflicts that originate during pre-training remains unexplored. In this work, we design a framework based on mechanistic interpretability methods to identify where and how conflicting knowledge from the pre-training data is encoded within LMs. Our findings contribute to a growing body of evidence that specific internal components of a language model are responsible for encoding conflicting knowledge from pre-training, and we demonstrate how mechanistic interpretability methods can be leveraged to causally intervene in and control conflicting knowledge at inference time.
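The abstract does not specify the intervention procedure; as a rough illustration of the kind of causal intervention at inference time it alludes to, the sketch below patches the output of a single MLP layer in GPT-2 with activations cached from a prompt stating a conflicting fact, then compares next-token predictions. The model, layer index, prompts, and single-layer patching granularity are all illustrative assumptions, not the authors' actual framework.

```python
# Minimal activation-patching sketch (illustrative only; not the paper's code).
# Assumptions: GPT-2 via Hugging Face transformers, a hypothetical layer choice,
# and two made-up prompts expressing conflicting facts about the same event.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

LAYER = 6  # assumed layer to intervene on
prompt_a = "The ceremony was held in the city of"   # hypothetical fact variant A
prompt_b = "The treaty was signed in the city of"   # hypothetical conflicting variant B


def last_token_logits(prompt, hook=None):
    """Run the model on a prompt, optionally with a forward hook on one MLP."""
    handle = model.transformer.h[LAYER].mlp.register_forward_hook(hook) if hook else None
    try:
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
    finally:
        if handle is not None:
            handle.remove()
    return logits


# 1) Cache the chosen layer's MLP output while processing prompt B.
cached = {}
def cache_hook(module, inputs, output):
    cached["mlp"] = output.detach()
_ = last_token_logits(prompt_b, cache_hook)

# 2) Re-run prompt A, overwriting the final-position MLP output with the cached one.
def patch_hook(module, inputs, output):
    patched_out = output.clone()
    patched_out[:, -1, :] = cached["mlp"][:, -1, :]
    return patched_out

clean = last_token_logits(prompt_a)
patched = last_token_logits(prompt_a, patch_hook)

# 3) Inspect whether the intervention shifted the prediction toward the other fact.
for name, logits in [("clean", clean), ("patched", patched)]:
    top = tokenizer.decode(int(logits.argmax()))
    print(f"{name:>7}: top next token = {top!r}")
```

A localization study of this kind would typically sweep the layer index and position rather than fixing a single `LAYER`, attributing a component to the conflict when patching it flips the prediction between the two competing answers.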