Supervised Fine-Tuning or In-Context Learning? Evaluating LLMs for Clinical NER

Published: October 25, 2025 | arXiv ID: 2510.22285v1

By: Andrei Baroian

Potential Business Impact:

Helps clinicians automatically find adverse drug reactions and other medical entities in patient notes.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We study clinical Named Entity Recognition (NER) on the CADEC corpus and compare three families of approaches: (i) BERT-style encoders (BERT Base, BioClinicalBERT, RoBERTa-large), (ii) GPT-4o with few-shot in-context learning (ICL) under simple vs. complex prompts, and (iii) GPT-4o with supervised fine-tuning (SFT). All models are evaluated with standard NER metrics over CADEC's five entity types (ADR, Drug, Disease, Symptom, Finding). RoBERTa-large and BioClinicalBERT offer only limited improvements over BERT Base, showing the limits of this family of models. Among the LLM settings, simple ICL outperforms a longer, instruction-heavy prompt, and SFT achieves the strongest overall performance (F1 ≈ 87.1%), albeit at higher cost. We also find that the LLMs achieve higher accuracy on simplified tasks that restrict classification to two labels.
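The "standard NER metrics" mentioned above are typically entity-level precision, recall, and micro-averaged F1 over exact-match spans. A minimal sketch of that scoring scheme (an illustrative assumption, not the paper's actual evaluation code) might look like:

```python
# Hypothetical entity-level NER scoring: each entity is a (start, end, label)
# span; scores are micro-averaged across documents, counting only exact matches.

def ner_f1(gold, pred):
    """gold, pred: lists of sets of (start, end, label) spans, one set per document."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        tp += len(g & p)   # spans predicted exactly as in gold
        fp += len(p - g)   # predicted spans not in gold
        fn += len(g - p)   # gold spans the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with CADEC-style labels (made-up spans):
gold = [{(0, 2, "ADR"), (5, 6, "Drug")}]
pred = [{(0, 2, "ADR"), (7, 8, "Symptom")}]
print(ner_f1(gold, pred))  # -> (0.5, 0.5, 0.5)
```

Under exact-match scoring, a predicted span with the right boundaries but the wrong label (e.g. Symptom instead of ADR) counts as both a false positive and a false negative, which is one reason the two-label simplification mentioned in the abstract can raise measured accuracy.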

Page Count
10 pages

Category
Computer Science:
Computation and Language