Large language models provide unsafe answers to patient-posed medical questions

Published: July 25, 2025 | arXiv ID: 2507.18905v1

By: Rachel L. Draelos, Samina Afreen, Barbara Blasko, and more

Potential Business Impact:

Physician-led testing shows that publicly available AI chatbots frequently give problematic, and sometimes unsafe, medical advice in response to patient-posed questions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Millions of patients are already using large language model (LLM) chatbots for medical advice on a regular basis, raising patient safety concerns. This physician-led red-teaming study compares the safety of four publicly available chatbots--Claude by Anthropic, Gemini by Google, GPT-4o by OpenAI, and Llama3-70B by Meta--on a new dataset, HealthAdvice, using an evaluation framework that enables quantitative and qualitative analysis. In total, 888 chatbot responses are evaluated for 222 patient-posed advice-seeking medical questions on primary care topics spanning internal medicine, women's health, and pediatrics. We find statistically significant differences between chatbots. The rate of problematic responses varies from 21.6 percent (Claude) to 43.2 percent (Llama), with unsafe responses varying from 5 percent (Claude) to 13 percent (GPT-4o, Llama). Qualitative results reveal chatbot responses with the potential to lead to serious patient harm. This study suggests that millions of patients could be receiving unsafe medical advice from publicly available chatbots, and further work is needed to improve the clinical safety of these powerful tools.
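The abstract reports statistically significant differences in problematic-response rates between chatbots. As a rough illustration only (not the paper's actual analysis), the sketch below applies a chi-square test to the two extreme rates, with counts back-calculated from the reported percentages and question total; all variable names are hypothetical.

    # Illustrative significance check for the reported gap in problematic-response
    # rates (Claude 21.6% vs. Llama 43.2%, each over 222 questions).
    # Counts are back-calculated from the abstract's percentages; the paper's
    # own statistical analysis may use a different method.
    from scipy.stats import chi2_contingency

    n_questions = 222
    claude_problematic = round(0.216 * n_questions)  # ~48 responses
    llama_problematic = round(0.432 * n_questions)   # ~96 responses

    # 2x2 contingency table: [problematic, not problematic] per chatbot
    table = [
        [claude_problematic, n_questions - claude_problematic],
        [llama_problematic, n_questions - llama_problematic],
    ]

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

With these back-calculated counts the p-value falls well below 0.05, consistent with the abstract's claim of statistically significant differences between chatbots.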

Page Count
20 pages

Category
Computer Science:
Computation and Language