AutoMedic: An Automated Evaluation Framework for Clinical Conversational Agents with Medical Dataset Grounding
By: Gyutaek Oh, Sangjoon Park, Byung-Hoon Kim
Potential Business Impact:
Tests AI doctors in realistic patient conversations.
Evaluating large language models (LLMs) has recently emerged as a critical issue for their safe and trustworthy application in the medical domain. Although a variety of static medical question-answering (QA) benchmarks have been proposed, many aspects remain underexplored, such as how effectively LLMs generate responses in dynamic, interactive, multi-turn clinical conversations and how to evaluate them along multiple facets beyond simple accuracy. However, formal evaluation of such dynamic, interactive clinical situations is hindered by the vast combinatorial space of possible patient states and interaction trajectories, which makes these scenarios difficult to standardize and measure quantitatively. Here, we introduce AutoMedic, a multi-agent simulation framework that enables automated evaluation of LLMs as clinical conversational agents. AutoMedic transforms off-the-shelf static QA datasets into virtual patient profiles, enabling realistic, clinically grounded multi-turn dialogues between LLM agents. The performance of various clinical conversational agents is then assessed with our CARE metric, which provides a multi-faceted evaluation standard covering clinical conversational accuracy, efficiency/strategy, empathy, and robustness. Our findings, validated by human experts, demonstrate the validity of AutoMedic as an automated evaluation framework for clinical conversational agents and offer practical guidelines for the effective development of LLMs in conversational medical applications.
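The pipeline described in the abstract (turning a static QA item into a virtual patient profile, running a multi-turn dialogue between a patient agent and the clinical agent under test, then scoring the transcript on CARE dimensions) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`qa_to_profile`, `patient_reply`, `doctor_turn`, `score_care`), the profile schema, and the stubbed LLM calls are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class PatientProfile:
    # Hypothetical schema: a virtual patient derived from one static QA item.
    presenting_complaint: str
    hidden_findings: dict          # details revealed only when the doctor asks
    ground_truth_diagnosis: str

def qa_to_profile(qa_item: dict) -> PatientProfile:
    """Illustrative conversion of a static QA item into a patient profile.
    A real system would likely use an LLM to extract and structure these fields."""
    return PatientProfile(
        presenting_complaint=qa_item["question"].split(".")[0],
        hidden_findings={"full_vignette": qa_item["question"]},
        ground_truth_diagnosis=qa_item["answer"],
    )

def patient_reply(profile: PatientProfile, doctor_utterance: str) -> str:
    """Stub for the patient agent (an LLM conditioned on the profile)."""
    return f"(patient, in character) answering: {doctor_utterance}"

def doctor_turn(transcript: list[str]) -> str:
    """Stub for the clinical conversational agent under evaluation."""
    return "Can you tell me more about your symptoms?"

def simulate_dialogue(profile: PatientProfile, max_turns: int = 5) -> list[str]:
    """Run a short doctor-patient exchange and return the transcript."""
    transcript = [f"Patient: {profile.presenting_complaint}"]
    for _ in range(max_turns):
        question = doctor_turn(transcript)
        transcript.append(f"Doctor: {question}")
        transcript.append(f"Patient: {patient_reply(profile, question)}")
    return transcript

def score_care(transcript: list[str], profile: PatientProfile) -> dict:
    """Placeholder multi-faceted scoring; the paper's CARE metric covers
    accuracy, efficiency/strategy, empathy, and robustness (details not shown here)."""
    return {"accuracy": 0.0, "efficiency": 0.0, "empathy": 0.0, "robustness": 0.0}

if __name__ == "__main__":
    item = {"question": "A 45-year-old presents with exertional chest pain. What is the most likely diagnosis?",
            "answer": "Stable angina"}
    profile = qa_to_profile(item)
    dialogue = simulate_dialogue(profile)
    print(score_care(dialogue, profile))
```

In this hypothetical layout, the agent being evaluated only ever sees the evolving transcript, while the patient agent holds the hidden ground truth, which is what lets a static QA item drive an interactive, multi-turn evaluation.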
Similar Papers
AutoMedEval: Harnessing Language Models for Automatic Medical Capability Evaluation
Computation and Language
Checks doctor AI answers for medical accuracy.
Multi-agent Self-triage System with Medical Flowcharts
Artificial Intelligence
Helps AI give safe health advice like a doctor.
MindEval: Benchmarking Language Models on Multi-turn Mental Health Support
Computation and Language
Tests AI mental health helpers for real problems.