Assessing Large Language Models for Structured Medical Order Extraction

Published: October 12, 2025 | arXiv ID: 2510.10475v1

By: A H M Rezaul Karim, Ozlem Uzuner

Potential Business Impact:

Automatically pulls structured medical orders (medications, labs, imaging, follow-ups) out of clinical notes and doctor-patient conversations, so clinicians can act on them faster.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Medical order extraction is essential for structuring actionable clinical information, supporting decision-making, and enabling downstream applications such as documentation and workflow automation. Orders may be embedded in diverse sources, including electronic health records, discharge summaries, and multi-turn doctor-patient dialogues, and can span categories such as medications, laboratory tests, imaging studies, and follow-up actions. The MEDIQA-OE 2025 shared task focuses on extracting structured medical orders from extended conversational transcripts, requiring the identification of order type, description, reason, and provenance. We present the MasonNLP submission, which ranked 5th among 17 participating teams with 105 total submissions. Our approach uses a general-purpose, instruction-tuned LLaMA-4 17B model without domain-specific fine-tuning, guided by a single in-context example. This few-shot configuration achieved an average F1 score of 37.76, with notable improvements in reason and provenance accuracy. These results demonstrate that large, non-domain-specific LLMs, when paired with effective prompt engineering, can serve as strong, scalable baselines for specialized clinical NLP tasks.
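To make the few-shot setup concrete, below is a minimal sketch of a one-shot prompt for structured order extraction in the spirit of the MasonNLP approach. The prompt wording, JSON field names, and the in-context example dialogue are illustrative assumptions, not the authors' actual prompt or data; only the four target fields (order type, description, reason, provenance) come from the task description.

```python
import json

# Hypothetical single in-context example; the real shared-task data differs.
IN_CONTEXT_EXAMPLE = {
    "transcript": (
        "Doctor: Your blood pressure is still high, so let's start lisinopril "
        "10 mg daily. I'd also like a basic metabolic panel before your next visit."
    ),
    "orders": [
        {
            "order_type": "medication",
            "description": "lisinopril 10 mg daily",
            "reason": "elevated blood pressure",
            "provenance": "let's start lisinopril 10 mg daily",
        },
        {
            "order_type": "laboratory",
            "description": "basic metabolic panel",
            "reason": "monitoring before next visit",
            "provenance": "I'd also like a basic metabolic panel before your next visit",
        },
    ],
}


def build_prompt(transcript: str) -> str:
    """Assemble a one-shot prompt asking the model for JSON-formatted orders."""
    return (
        "Extract all medical orders from the conversation. For each order, "
        "return order_type, description, reason, and provenance as a JSON list.\n\n"
        f"Example conversation:\n{IN_CONTEXT_EXAMPLE['transcript']}\n"
        f"Example output:\n{json.dumps(IN_CONTEXT_EXAMPLE['orders'], indent=2)}\n\n"
        f"Conversation:\n{transcript}\nOutput:\n"
    )


def parse_orders(model_output: str) -> list[dict]:
    """Parse the model's JSON reply, tolerating text before/after the list."""
    start, end = model_output.find("["), model_output.rfind("]") + 1
    return json.loads(model_output[start:end]) if start != -1 and end > start else []


if __name__ == "__main__":
    prompt = build_prompt("Doctor: Let's get a chest X-ray to rule out pneumonia.")
    print(prompt)  # this string would be sent to the instruction-tuned LLM
    # Simulated model reply, for demonstration only:
    reply = (
        '[{"order_type": "imaging", "description": "chest X-ray", '
        '"reason": "rule out pneumonia", '
        '"provenance": "Let\'s get a chest X-ray to rule out pneumonia."}]'
    )
    print(parse_orders(reply))
```

In this reading, the extracted JSON would then be compared field-by-field against the gold orders to compute the per-field and average F1 scores reported above.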

Country of Origin
🇺🇸 United States

Page Count
11 pages

Category
Computer Science:
Computation and Language