From Feedback to Checklists: Grounded Evaluation of AI-Generated Clinical Notes
By: Karen Zhou, John Giorgi, Pranav Mani, and more
Potential Business Impact:
Helps doctors check AI notes for mistakes.
AI-generated clinical notes are increasingly used in healthcare, but evaluating their quality remains challenging: expert review is subjective and does not scale, and existing automated metrics often fail to align with real-world physician preferences. To address this, we propose a pipeline that systematically distills real user feedback into structured checklists for note evaluation. These checklists are designed to be interpretable, grounded in human feedback, and enforceable by LLM-based evaluators. Using deidentified data from over 21,000 clinical encounters from a deployed AI medical scribe system, prepared in accordance with the HIPAA Safe Harbor standard, we show that our feedback-derived checklist outperforms baseline approaches in coverage, diversity, and predictive power for human ratings in our offline evaluations. Extensive experiments confirm the checklist's robustness to quality-degrading perturbations, its significant alignment with clinician preferences, and its practical value as an evaluation methodology. In offline research settings, the checklist can help identify notes likely to fall below our chosen quality thresholds.
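The paper does not publish an implementation here, but as a rough illustration of what "enforceable by LLM-based evaluators" could mean in practice, the sketch below scores a note against a small checklist by asking an LLM judge one yes/no question per item. All item wordings, names, and the `call_llm` stub are assumptions for illustration, not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    """One feedback-derived criterion, phrased as a yes/no question."""
    question: str
    weight: float = 1.0

# Hypothetical items distilled from clinician feedback (illustrative only).
CHECKLIST = [
    ChecklistItem("Does the note avoid stating symptoms the patient denied?"),
    ChecklistItem("Is the medication list limited to medications discussed in the encounter?"),
    ChecklistItem("Does the assessment and plan follow from the documented findings?"),
]

def call_llm(prompt: str) -> str:
    """Stub for an LLM judge; replace with a real model call.

    The judge is expected to answer strictly 'yes' or 'no'."""
    return "yes"

def score_note(note: str, checklist: list[ChecklistItem]) -> float:
    """Return the weighted fraction of checklist items the note satisfies."""
    total = sum(item.weight for item in checklist)
    passed = 0.0
    for item in checklist:
        prompt = (
            "You are evaluating an AI-generated clinical note.\n"
            f"Note:\n{note}\n\n"
            f"Question: {item.question}\n"
            "Answer strictly 'yes' or 'no'."
        )
        if call_llm(prompt).strip().lower().startswith("yes"):
            passed += item.weight
    return passed / total

if __name__ == "__main__":
    example_note = (
        "Patient reports a dry cough for 3 days. Denies fever. "
        "Plan: supportive care and follow-up in one week."
    )
    print(f"Checklist score: {score_note(example_note, CHECKLIST):.2f}")
```

Under this sketch, a note whose weighted score falls below a chosen threshold could be flagged for review, which is the offline use the abstract describes.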
Similar Papers
GAPS: A Clinically Grounded, Automated Benchmark for Evaluating AI Clinicians
Computation and Language
Tests AI doctors for real-world safety.
Rethinking Evidence Hierarchies in Medical Language Benchmarks: A Critical Evaluation of HealthBench
Artificial Intelligence
Makes health AI trustworthy using proven guidelines.
How to Evaluate Medical AI
Artificial Intelligence
Helps AI doctors agree with human doctors.