A Women's Health Benchmark for Large Language Models
By: Victoria-Elisabeth Gruber, Razvan Marinescu, Diego Fajardo, and others
Potential Business Impact:
AI chatbots give wrong women's health advice.
As large language models (LLMs) become primary sources of health information for millions, their accuracy in women's health remains critically unexamined. We introduce the Women's Health Benchmark (WHB), the first benchmark evaluating LLM performance specifically in women's health. Our benchmark comprises 96 rigorously validated model stumps covering five medical specialties (obstetrics and gynecology, emergency medicine, primary care, oncology, and neurology), three query types (patient query, clinician query, and evidence/policy query), and eight error types (dosage/medication errors, missing critical information, outdated guidelines/treatment recommendations, incorrect treatment advice, incorrect factual information, missing/incorrect differential diagnosis, missed urgency, and inappropriate recommendations). Our evaluation of 13 state-of-the-art LLMs reveals alarming gaps: current models show approximately 60% failure rates on the benchmark, with performance varying dramatically across specialties and error types. Notably, models universally struggle with "missed urgency" indicators, while newer models like GPT-5 show significant improvements in avoiding inappropriate recommendations. Our findings underscore that AI chatbots are not yet fully capable of providing reliable advice in women's health.
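The taxonomy above (five specialties, three query types, eight error types) lends itself to a simple per-category scoring scheme. The sketch below shows one way to compute an overall failure rate and per-error-type breakdowns from graded benchmark items; the `WHBResult` record, its field names, and the pass/fail scoring are illustrative assumptions, not the paper's actual harness.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record for one graded benchmark item. The specialty, query
# type, and error type values come from the WHB taxonomy described above;
# the field names and boolean scoring are assumptions for illustration.
@dataclass
class WHBResult:
    specialty: str      # e.g. "obstetrics and gynecology"
    query_type: str     # "patient", "clinician", or "evidence/policy"
    error_type: str     # e.g. "missed urgency"
    passed: bool        # did the model's answer avoid the targeted error?

def failure_rates(results):
    """Overall and per-error-type failure rates for one model's run."""
    overall = sum(not r.passed for r in results) / len(results)
    failures = Counter(r.error_type for r in results if not r.passed)
    totals = Counter(r.error_type for r in results)
    return overall, {e: failures[e] / totals[e] for e in totals}

# Toy run: 3 of 5 items fail, mirroring the ~60% headline failure rate.
run = [
    WHBResult("oncology", "patient", "missed urgency", False),
    WHBResult("oncology", "clinician", "missed urgency", False),
    WHBResult("neurology", "patient", "dosage/medication errors", False),
    WHBResult("primary care", "patient", "dosage/medication errors", True),
    WHBResult("emergency medicine", "evidence/policy", "missed urgency", True),
]
overall, per_type = failure_rates(run)
print(round(overall, 2))                     # 0.6
print(round(per_type["missed urgency"], 2))  # 0.67
```

Grouping failures by error type is what surfaces patterns like the universal "missed urgency" weakness: an aggregate score alone would hide which error categories drive it.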
Similar Papers
Beyond the Rubric: Cultural Misalignment in LLM Benchmarks for Sexual and Reproductive Health
Computers and Society
Makes health chatbots work for different cultures.
HealthBench: Evaluating Large Language Models Towards Improved Human Health
Computation and Language
Tests AI for safe and helpful medical advice.
Benchmarking the Medical Understanding and Reasoning of Large Language Models in Arabic Healthcare Tasks
Computation and Language
Helps computers understand Arabic medical questions.