Assessing and Mitigating Medical Knowledge Drift and Conflicts in Large Language Models
By: Weiyi Wu, Xinwen Xu, Chongyang Gao, and more
Potential Business Impact:
Makes AI doctors give up-to-date advice.
Large Language Models (LLMs) hold great promise for health care, yet they face substantial challenges in adapting to rapidly evolving medical knowledge, which can lead to outdated or contradictory treatment suggestions. This study investigated how LLMs respond to evolving clinical guidelines, focusing on concept drift and internal inconsistencies. We developed the DriftMedQA benchmark to simulate guideline evolution and assessed the temporal reliability of various LLMs. Our evaluation of seven state-of-the-art models across 4,290 scenarios showed that models struggle to reject outdated recommendations and frequently endorse conflicting guidance. We also explored two mitigation strategies: Retrieval-Augmented Generation and preference fine-tuning via Direct Preference Optimization. While each method improved model performance, their combination produced the most consistent and reliable results. These findings underscore the need to improve LLM robustness to temporal shifts for more dependable applications in clinical practice. The dataset is available at https://huggingface.co/datasets/RDBH/DriftMed.
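For readers who want to inspect the benchmark, here is a minimal sketch of loading it with the Hugging Face `datasets` library. The repo id comes from the link above; the paper does not specify split or field names, so the sketch inspects the schema rather than assuming it.

```python
# Minimal sketch: load the DriftMedQA benchmark and inspect its schema.
# Repo id taken from the paper's dataset link; splits and fields are
# unknown in advance, so we discover them at runtime.
from datasets import load_dataset

dataset = load_dataset("RDBH/DriftMed")   # returns a DatasetDict of splits

print("Splits:", list(dataset))
split = next(iter(dataset.values()))      # take the first available split
print("Examples in first split:", len(split))
print("Fields:", split.column_names)      # inspect the actual column names
print(split[0])                           # peek at one scenario
```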
Similar Papers
Facts Fade Fast: Evaluating Memorization of Outdated Medical Knowledge in Large Language Models
Computation and Language
Checks if AI doctors still repeat outdated medical facts.
Medical large language models are easily distracted
Computation and Language
Shows medical AI can be distracted by irrelevant details.
Dr. GPT Will See You Now, but Should It? Exploring the Benefits and Harms of Large Language Models in Medical Diagnosis using Crowdsourced Clinical Cases
Computers and Society
Weighs the benefits and harms of AI medical diagnosis.