TCM-Eval: An Expert-Level Dynamic and Extensible Benchmark for Traditional Chinese Medicine
By: Zihao Cheng, Yuheng Lu, Huaiqian Ye, and more
Potential Business Impact:
Helps AI systems understand Traditional Chinese Medicine.
Large Language Models (LLMs) have demonstrated remarkable capabilities in modern medicine, yet their application in Traditional Chinese Medicine (TCM) remains severely limited by the absence of standardized benchmarks and the scarcity of high-quality training data. To address these challenges, we introduce TCM-Eval, the first dynamic and extensible benchmark for TCM, meticulously curated from national medical licensing examinations and validated by TCM experts. Furthermore, we construct a large-scale training corpus and propose Self-Iterative Chain-of-Thought Enhancement (SI-CoTE) to autonomously enrich question-answer pairs with validated reasoning chains through rejection sampling, establishing a virtuous cycle of data and model co-evolution. Using this enriched training data, we develop ZhiMingTang (ZMT), a state-of-the-art LLM specifically designed for TCM, which significantly exceeds the passing threshold for human practitioners. To encourage future research and development, we release a public leaderboard, fostering community engagement and continuous improvement.
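The abstract names SI-CoTE but gives no implementation details, so the following Python is only a rough sketch of what rejection-sampled chain-of-thought enrichment could look like. The `QAPair` type, the `sample_cot` callable, and the exact-match acceptance rule are all illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch of rejection-sampled chain-of-thought (CoT) enrichment
# in the spirit of SI-CoTE as described in the abstract. Names,
# signatures, and the acceptance rule are assumptions for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class QAPair:
    question: str
    answer: str  # validated gold answer from the licensing-exam corpus

# Stand-in for an LLM call that returns one sampled
# (rationale, predicted_answer) pair for a question.
CoTSampler = Callable[[str], tuple[str, str]]

def si_cote_round(sample_cot: CoTSampler,
                  corpus: list[QAPair],
                  num_samples: int = 8) -> list[tuple[str, str, str]]:
    """One enhancement round: keep a rationale only if it derives the gold answer."""
    enriched = []
    for pair in corpus:
        for _ in range(num_samples):
            rationale, predicted = sample_cot(pair.question)
            # Rejection step: accept the chain only when its conclusion
            # matches the validated answer.
            if predicted.strip() == pair.answer.strip():
                enriched.append((pair.question, rationale, pair.answer))
                break  # one validated chain per pair suffices here
    return enriched
```

Under this reading, the "virtuous cycle of data and model co-evolution" would fine-tune the model on the enriched triples, then rerun the round with the stronger model to recover reasoning chains for items that were rejected every time.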
Similar Papers
TCM-5CEval: Extended Deep Evaluation Benchmark for LLM's Comprehensive Clinical Research Competence in Traditional Chinese Medicine
Computation and Language
Tests whether AI understands Traditional Chinese Medicine.
TCM-3CEval: A Triaxial Benchmark for Assessing Responses from Large Language Models in Traditional Chinese Medicine
Computation and Language
Tests AI on Traditional Chinese Medicine knowledge.
A benchmark dataset for evaluating Syndrome Differentiation and Treatment in large language models
Computation and Language
Evaluates how well AI selects Traditional Chinese Medicine treatments.