Task Matters: Knowledge Requirements Shape LLM Responses to Context-Memory Conflict
By: Kaiser Sun, Fan Bai, Mark Dredze
Potential Business Impact:
Helps computers know when to trust new facts.
Large Language Models require both contextual knowledge and parametric memory, but these sources can disagree. Prior investigations of contextual question-answering tasks report a preference for parametric knowledge under conflict, yet they focus almost exclusively on tasks that should always rely on the given passage, leaving open how this behavior manifests when tasks demand different amounts and kinds of knowledge. We study this question with a model-agnostic diagnostic framework that (i) automatically detects disagreements between a model's beliefs and a curated knowledge set, and (ii) injects controlled conflicts into tasks. The resulting datasets span two orthogonal dimensions: task knowledge reliance and conflict plausibility. Evaluating representative open-source LLMs, we find that (1) performance degradation from conflict correlates with a task's knowledge reliance, and (2) explanatory rationales and simple reiteration both increase context reliance, which helps context-only tasks but hurts tasks where parametric knowledge should dominate. These behaviors raise concerns about the validity of model-based evaluation and underscore the need to account for knowledge conflict when deploying LLMs.
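The abstract's two-step framework is concrete enough to sketch. The minimal Python sketch below illustrates the idea, not the authors' implementation: a closed-book probe that checks whether the model's parametric belief matches a curated fact (step i), and a controlled edit that swaps a conflicting answer into the supporting passage (step ii). `ask_model`, the `Fact` fields, and the substring-based answer matching are all illustrative assumptions.

```python
# Illustrative sketch of the abstract's two-step diagnostic framework.
# `ask_model` is a hypothetical stand-in for any LLM completion call.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Fact:
    question: str        # e.g. "What is the capital of Australia?"
    answer: str          # curated gold answer, e.g. "Canberra"
    passage: str         # supporting context that states the answer
    counterfactual: str  # conflicting answer to inject, e.g. "Sydney"


def detect_disagreement(fact: Fact, ask_model: Callable[[str], str]) -> bool:
    """Step (i): probe the model closed-book and return True when its
    parametric belief disagrees with the curated answer.

    Facts where the model already disagrees would be filtered out, since
    an injected 'conflict' would not contradict what the model believes.
    """
    belief = ask_model(f"Answer briefly: {fact.question}")
    return fact.answer.lower() not in belief.lower()


def inject_conflict(fact: Fact) -> str:
    """Step (ii): rewrite the passage so it asserts the counterfactual.

    A plain string substitution stands in for the paper's controlled
    edits; conflict plausibility would be varied by the choice of
    counterfactual, an assumption of this sketch.
    """
    return fact.passage.replace(fact.answer, fact.counterfactual)


def evaluate_under_conflict(fact: Fact, ask_model: Callable[[str], str]) -> str:
    """Ask the question with the conflicting passage in context and label
    which knowledge source the model's answer followed."""
    context = inject_conflict(fact)
    reply = ask_model(
        f"Context: {context}\nQuestion: {fact.question}\nAnswer briefly:"
    )
    if fact.counterfactual.lower() in reply.lower():
        return "context"  # model followed the injected conflict
    if fact.answer.lower() in reply.lower():
        return "memory"   # model stuck with parametric knowledge
    return "other"
```

Aggregating the returned "context" vs. "memory" labels per task would yield the kind of context-reliance rates the abstract describes; a real pipeline would need more robust belief elicitation and answer matching than these substring checks.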
Similar Papers
Do Large Language Models Know Conflict? Investigating Parametric vs. Non-Parametric Knowledge of LLMs for Conflict Forecasting
Computation and Language
Predicts wars using computer brains.
Question Answering under Temporal Conflict: Evaluating and Organizing Evolving Knowledge with LLMs
Computation and Language
Helps computers remember and use new facts.
LLMs Struggle to Perform Counterfactual Reasoning with Parametric Knowledge
Artificial Intelligence
Computers can't easily mix old and new facts.