LONGQAEVAL: Designing Reliable Evaluations of Long-Form Clinical QA under Resource Constraints

Published: October 12, 2025 | arXiv ID: 2510.10415v1

By: Federica Bologna, Tiffany Pan, Matthew Wilkens, and more

Potential Business Impact:

Evaluates AI-generated answers to patient medical questions faster and at lower cost.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Evaluating long-form clinical question answering (QA) systems is resource-intensive and challenging: accurate judgments require medical expertise, and achieving consistent human judgments over long-form text is difficult. We introduce LongQAEval, an evaluation framework and set of evaluation recommendations for limited-resource, high-expertise settings. Based on physician annotations of 300 real patient questions answered by physicians and LLMs, we compare coarse answer-level versus fine-grained sentence-level evaluation along the dimensions of correctness, relevance, and safety. We find that inter-annotator agreement (IAA) varies by dimension: fine-grained annotation improves agreement on correctness, coarse annotation improves agreement on relevance, and judgments on safety remain inconsistent. Additionally, annotating only a small subset of sentences can provide reliability comparable to coarse annotation while reducing cost and effort.
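A minimal sketch of the kind of agreement comparison the abstract describes, assuming binary correct/incorrect labels from two annotators and Cohen's kappa as the IAA statistic (the paper may use a different coefficient); the label arrays below are illustrative placeholders, not data from the study:

```python
# Compare inter-annotator agreement for coarse (answer-level) vs.
# fine-grained (sentence-level) annotation using Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

# Coarse judgments: one binary label per answer, from two annotators.
ann1_answer = [1, 0, 1, 1, 0, 1, 0, 1]
ann2_answer = [1, 1, 1, 0, 0, 1, 0, 1]

# Fine-grained judgments: one binary label per sentence, flattened across answers.
ann1_sentence = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1]
ann2_sentence = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1]

print("Answer-level kappa:  ", round(cohen_kappa_score(ann1_answer, ann2_answer), 3))
print("Sentence-level kappa:", round(cohen_kappa_score(ann1_sentence, ann2_sentence), 3))
```

The same comparison could be repeated on a randomly sampled subset of sentences per answer to probe the paper's finding that partial sentence-level annotation can match the reliability of coarse annotation at lower cost.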

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science: Computation and Language