Facts Fade Fast: Evaluating Memorization of Outdated Medical Knowledge in Large Language Models
By: Juraj Vladika, Mahdi Dhaini, Florian Matthes
Potential Business Impact:
AI doctors can cling to outdated medical facts.
The growing capabilities of Large Language Models (LLMs) show significant potential to enhance healthcare by assisting medical researchers and physicians. However, their reliance on static training data becomes a major risk as medical recommendations evolve with new research. When LLMs memorize outdated medical knowledge, they can give harmful advice or fail at clinical reasoning tasks. To investigate this problem, we introduce two novel question-answering (QA) datasets derived from systematic reviews: MedRevQA (16,501 QA pairs covering general biomedical knowledge) and MedChangeQA (a subset of 512 QA pairs where the medical consensus has changed over time). Our evaluation of eight prominent LLMs on both datasets reveals a consistent reliance on outdated knowledge across all models. We additionally analyze the influence of obsolete pre-training data and training strategies to explain this phenomenon, and we propose future directions for mitigation, laying the groundwork for more current and reliable medical AI systems.
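To make the evaluation setup concrete, below is a minimal sketch of how one might score an LLM against such a QA dataset. The file format, the yes/no answer labels, and the `ask_model` callable are illustrative assumptions for this sketch, not the paper's actual MedRevQA/MedChangeQA schema or evaluation harness.

```python
# Minimal sketch: score an LLM's answers against consensus labels in a QA dataset.
# ASSUMPTIONS: QA pairs stored as JSON [{"question": ..., "answer": "yes"/"no"}, ...];
# `ask_model` is any callable that sends a question to an LLM and returns its answer.
import json
from typing import Callable


def load_qa_pairs(path: str) -> list[dict]:
    """Load QA pairs from a JSON file (hypothetical format, see note above)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def evaluate(qa_pairs: list[dict], ask_model: Callable[[str], str]) -> float:
    """Return the fraction of questions where the model's answer matches
    the current medical consensus recorded in the dataset."""
    correct = 0
    for pair in qa_pairs:
        prediction = ask_model(pair["question"]).strip().lower()
        if prediction == pair["answer"].strip().lower():
            correct += 1
    return correct / len(qa_pairs)


if __name__ == "__main__":
    # Stand-in model that always answers "yes"; replace with a real LLM call.
    dummy_model = lambda question: "yes"
    pairs = [
        {"question": "Is routine vitamin E supplementation recommended "
                     "for cardiovascular prevention?", "answer": "no"},
    ]
    print(f"Agreement with current consensus: {evaluate(pairs, dummy_model):.2%}")
```

Running the same loop on a subset where the consensus has flipped over time (as MedChangeQA isolates) would show whether a model tracks the updated recommendation or the older, memorized one.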
Similar Papers
Memorization in Large Language Models in Medicine: Prevalence, Characteristics, and Implications
Computation and Language
Doctors' AI can memorize patient data, for better and for worse.
Assessing and Mitigating Medical Knowledge Drift and Conflicts in Large Language Models
Computation and Language
Makes AI doctors give up-to-date advice.
Beyond MedQA: Towards Real-world Clinical Decision Making in the Era of LLMs
Computation and Language
Helps doctors make better choices using smart computer programs.