RephQA: Evaluating Readability of Large Language Models in Public Health Question Answering
By: Weikang Qiu, Tinglin Huang, Ryan Rullo, and more
Potential Business Impact:
Helps doctors explain health problems simply.
Large Language Models (LLMs) hold promise for addressing complex medical problems. However, while most prior studies focus on improving accuracy and reasoning abilities, a significant bottleneck in developing effective healthcare agents lies in the readability of LLM-generated responses, specifically, their ability to answer public health questions clearly and simply for people without medical backgrounds. In this work, we introduce RephQA, a benchmark for evaluating the readability of LLMs in public health question answering (QA). It contains 533 expert-reviewed QA pairs from 27 sources across 13 topics, and includes a proxy multiple-choice task to assess informativeness, along with two readability metrics: Flesch-Kincaid grade level and professional score. Evaluation of 25 LLMs reveals that most fail to meet readability standards, highlighting a gap between reasoning and effective communication. To address this, we explore four readability-enhancing strategies: standard prompting, chain-of-thought prompting, Group Relative Policy Optimization (GRPO), and a token-adapted variant of GRPO. Token-adapted GRPO achieves the best results, a step toward building more practical and user-friendly agents for public health.
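For readers unfamiliar with the Flesch-Kincaid grade level used above, the sketch below shows how it is typically computed and how such a score could, in principle, serve as a reward signal for readability-focused training. This is a minimal illustration only: the syllable counter is a naive vowel-group heuristic, the readability_reward function is a hypothetical example rather than the paper's reward design, and the benchmark's "professional score" is a separate metric not reproduced here.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels (at least one per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

# Hypothetical readability reward (not the paper's exact design):
# a lower grade level yields a higher reward, which a GRPO-style
# trainer could then optimize alongside answer correctness.
def readability_reward(text: str) -> float:
    return -flesch_kincaid_grade(text)

print(round(flesch_kincaid_grade(
    "Wash your hands often. Soap and water work best."), 2))
```

A lower grade level indicates text accessible to readers with less formal education, which is the readability target the benchmark evaluates.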
Similar Papers
Healthy LLMs? Benchmarking LLM Knowledge of UK Government Public Health Information
Computation and Language
Tests if AI knows UK health advice.
Structured Outputs Enable General-Purpose LLMs to be Medical Experts
Computation and Language
Helps AI give safer, smarter answers about health.
Automatic Evaluation of Healthcare LLMs Beyond Question-Answering
Computation and Language
Tests AI for doctor answers, finds flaws.