
LLM Sensitivity Evaluation Framework for Clinical Diagnosis

Published: April 18, 2025 | arXiv ID: 2504.13475v1

By: Chenwei Yan, Xiangling Fu, Yuxuan Xiong, and others

Potential Business Impact:

Helps doctors diagnose illnesses more accurately.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) have demonstrated impressive performance across various domains. Clinical diagnosis, however, places higher demands on their reliability and sensitivity: an LLM must reason like a physician and remain sensitive to the key medical information that drives diagnostic reasoning, since subtle variations in that information can lead to different diagnoses. Yet existing work focuses mainly on LLMs' sensitivity to irrelevant context and overlooks the importance of key information. In this paper, we investigate the sensitivity of several LLMs (GPT-3.5, GPT-4, Gemini, Claude3, and LLaMA2-7b) to key medical information by introducing different perturbation strategies. The evaluation results highlight the limitations of current LLMs in remaining sensitive to key medical information during diagnostic decision-making. Future development of LLMs should focus on improving their reliability, their sensitivity to key information, and their ability to use that information effectively. Such improvements would strengthen human trust in LLMs and facilitate their application in real-world clinical settings. Our code and dataset are available at https://github.com/chenwei23333/DiagnosisQA.
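The abstract describes perturbing key medical information and checking whether a model's diagnosis changes. As a minimal illustrative sketch (not the paper's actual method or API), one such strategy could delete a key symptom from a clinical question and flag the model as insensitive if its answer does not change; all function names here are hypothetical:

```python
# Hypothetical sketch of one "key information" perturbation strategy:
# removing a key symptom term from a clinical question. The paper's real
# strategies and dataset live in the linked DiagnosisQA repository.

def delete_key_term(question: str, key_term: str) -> str:
    """Return the question with the given key medical term removed,
    to test whether a model's diagnosis survives losing that evidence."""
    return " ".join(
        tok for tok in question.split()
        if tok.strip(",.;").lower() != key_term.lower()
    )

def answer_changed(answer_original: str, answer_perturbed: str) -> bool:
    """A sensitive model should revise its diagnosis when key
    information disappears; an identical answer is a warning sign."""
    return answer_original.strip().lower() != answer_perturbed.strip().lower()

question = "Patient reports fever, stiff neck, and photophobia."
perturbed = delete_key_term(question, "photophobia")
# The two prompts would then be sent to the model under evaluation,
# and answer_changed() applied to the pair of responses.
```

The comparison here is deliberately crude (exact string match); the evaluation in the paper would need a more robust notion of diagnostic equivalence.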

Country of Origin
πŸ‡ΈπŸ‡¬ πŸ‡¨πŸ‡³ Singapore, China

Repos / Data Links
https://github.com/chenwei23333/DiagnosisQA

Page Count
12 pages

Category
Computer Science:
Computation and Language