MedRedFlag: Investigating how LLMs Redirect Misconceptions in Real-World Health Communication
By: Sraavya Sambara , Yuan Pu , Ayman Ali and more
Real-world health questions from patients often unintentionally embed false assumptions or premises. In such cases, safe medical communication typically involves redirection: addressing the implicit misconception and then responding to the underlying patient context, rather than the original question. While large language models (LLMs) are increasingly being used by lay users for medical advice, they have not yet been tested for this crucial competency. Therefore, in this work, we investigate how LLMs react to false premises embedded within real-world health questions. We develop a semi-automated pipeline to curate MedRedFlag, a dataset of 1100+ questions sourced from Reddit that require redirection. We then systematically compare responses from state-of-the-art LLMs to those from clinicians. Our analysis reveals that LLMs often fail to redirect problematic questions, even when the problematic premise is detected, and provide answers that could lead to suboptimal medical decision making. Our benchmark and results reveal a novel and substantial gap in how LLMs perform under the conditions of real-world health communication, highlighting critical safety concerns for patient-facing medical AI systems. Code and dataset are available at https://github.com/srsambara-1/MedRedFlag.
Similar Papers
Editing with AI: How Doctors Refine LLM-Generated Answers to Patient Queries
Human-Computer Interaction
Helps doctors answer patient questions faster.
Enhancing Patient-Centric Communication: Leveraging LLMs to Simulate Patient Perspectives
Artificial Intelligence
Computers explain health instructions better for patients.
Do LLMs Provide Consistent Answers to Health-Related Questions across Languages?
Computation and Language
Finds health info online is different in other languages.