UTSA-NLP at ArchEHR-QA 2025: Improving EHR Question Answering via Self-Consistency Prompting
By: Sara Shields-Menard, Zach Reimers, Joshua Gardner, and more
Potential Business Impact:
Answers doctor questions using patient records.
We describe our system for the ArchEHR-QA Shared Task on answering clinical questions using electronic health records (EHRs). Our approach uses large language models in two steps: first, to find sentences in the EHR relevant to a clinician's question, and second, to generate a short, citation-supported response based on those sentences. We use few-shot prompting, self-consistency, and thresholding to improve the sentence classification step that decides which sentences are essential. We compare several models and find that a smaller 8B model outperforms a larger 70B model at identifying relevant information. Our results show that accurate sentence selection is critical for generating high-quality responses and that self-consistency with thresholding makes these decisions more reliable.
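The self-consistency-with-thresholding idea from the abstract can be sketched as follows: sample the model's relevance label for a sentence several times and keep the sentence only if the "essential" vote fraction clears a threshold. This is a minimal illustration, not the authors' implementation; the function name, label strings, and the `n_samples`/`threshold` values are all hypothetical.

```python
from collections import Counter

def self_consistent_label(sample_fn, n_samples=5, threshold=0.6):
    """Aggregate several sampled labels for one EHR sentence.

    sample_fn: callable returning one label per call (e.g. one LLM sample;
    here a stand-in). The sentence is kept as "essential" only if the
    fraction of "essential" votes meets the threshold.
    """
    votes = [sample_fn() for _ in range(n_samples)]
    frac = Counter(votes)["essential"] / n_samples
    return "essential" if frac >= threshold else "not-relevant"

# Demo with a deterministic stand-in for an LLM sampler:
# 4 of 5 samples say "essential", so 0.8 >= 0.6 and the sentence is kept.
samples = iter(["essential", "essential", "not-relevant",
                "essential", "essential"])
print(self_consistent_label(lambda: next(samples)))  # prints "essential"
```

Raising the threshold trades recall for precision: borderline sentences that the model labels inconsistently across samples are dropped rather than cited.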
Similar Papers
Neural at ArchEHR-QA 2025: Agentic Prompt Optimization for Evidence-Grounded Clinical Question Answering
Machine Learning (CS)
Helps doctors find patient info faster.
Toward Human Centered Interactive Clinical Question Answering System
Human-Computer Interaction
Helps doctors find patient info in notes.
A Dataset for Addressing Patient's Information Needs related to Clinical Course of Hospitalization
Computation and Language
Helps doctors answer patient questions using health records.