Score: 1

Evaluating Large Language Models on Rare Disease Diagnosis: A Case Study using House M.D.

Published: November 14, 2025 | arXiv ID: 2511.10912v1

By: Arsh Gupta, Ajay Narayanan Sridhar, Bonam Mingole, and more

Potential Business Impact:

Benchmarks how well AI language models can diagnose rare diseases from narrative patient case descriptions.

Business Areas:
Health Diagnostics, Health Care

Large language models (LLMs) have demonstrated capabilities across diverse domains, yet their performance on rare disease diagnosis from narrative medical cases remains underexplored. We introduce a novel dataset of 176 symptom-diagnosis pairs extracted from House M.D., a medical television series validated for teaching rare disease recognition in medical education. We evaluate four state-of-the-art LLMs (GPT-4o mini, GPT-5 mini, Gemini 2.5 Flash, and Gemini 2.5 Pro) on narrative-based diagnostic reasoning tasks. Results show significant variation in performance, ranging from 16.48% to 38.64% accuracy, with newer model generations demonstrating a 2.3-fold improvement. While all models face substantial challenges with rare disease diagnosis, the observed improvement across architectures suggests promising directions for future development. Our educationally validated benchmark establishes baseline performance metrics for narrative medical reasoning and provides a publicly accessible evaluation framework for advancing AI-assisted diagnosis research.
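The abstract describes a benchmark that scores models by diagnostic accuracy over symptom-diagnosis pairs. The sketch below shows what such an evaluation loop could look like; the JSONL file name, the "symptoms"/"diagnosis" field names, the prompt wording, the query_model callable, and the substring-match scoring rule are all illustrative assumptions rather than the authors' published pipeline.

```python
# Minimal sketch of a narrative-diagnosis evaluation loop, assuming a dataset of
# (symptom narrative, diagnosis) pairs stored as JSON lines and a caller-supplied
# query_model() function that returns a model's free-text diagnosis.
import json
from typing import Callable, List, Tuple


def load_pairs(path: str) -> List[Tuple[str, str]]:
    """Read symptom-diagnosis pairs from a JSONL file with 'symptoms' and 'diagnosis' keys."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            pairs.append((record["symptoms"], record["diagnosis"]))
    return pairs


def evaluate(pairs: List[Tuple[str, str]], query_model: Callable[[str], str]) -> float:
    """Return diagnostic accuracy: fraction of cases where the gold diagnosis
    appears in the model's answer (case-insensitive substring match)."""
    correct = 0
    for narrative, gold in pairs:
        prompt = (
            "A patient presents with the following narrative case description:\n"
            f"{narrative}\n"
            "What is the single most likely diagnosis?"
        )
        answer = query_model(prompt)
        if gold.lower() in answer.lower():
            correct += 1
    return correct / len(pairs)


if __name__ == "__main__":
    # Toy stand-in case and model so the script runs end to end without API access.
    pairs = [("Fever, rash, and joint pain after a tick bite.", "Lyme disease")]
    accuracy = evaluate(pairs, lambda prompt: "This presentation suggests Lyme disease.")
    print(f"Accuracy: {accuracy:.2%}")
```

In practice the query_model callable would wrap each provider's API (for the four models listed above), and a stricter matching rule or expert adjudication could replace the simple substring check.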

Country of Origin
🇺🇸 United States

Page Count
5 pages

Category
Computer Science:
Computation and Language