A Survey on Large Language Model Benchmarks
By: Shiwen Ni, Guhong Chen, Shuaimin Li, and more
Potential Business Impact:
Tests AI language skills, finds flaws, suggests fixes.
In recent years, as the depth and breadth of large language models' capabilities have rapidly expanded, a growing number of corresponding evaluation benchmarks have emerged. As quantitative tools for assessing model performance, benchmarks are not only the core means of measuring model capabilities but also key to guiding the direction of model development and driving technological innovation. We present the first systematic review of the current state and evolution of large language model benchmarks, categorizing 283 representative benchmarks into three groups: general capability, domain-specific, and target-specific. General capability benchmarks cover core linguistics, knowledge, and reasoning; domain-specific benchmarks focus on fields such as the natural sciences, humanities and social sciences, and engineering technology; target-specific benchmarks address risks, reliability, agents, and related concerns. We point out that current benchmarks suffer from inflated scores caused by data contamination, unfair evaluation due to cultural and linguistic biases, and a lack of evaluation of process credibility and dynamic environments, and we provide a referable design paradigm for future benchmark innovation.
Similar Papers
Domain Specific Benchmarks for Evaluating Multimodal Large Language Models
Machine Learning (CS)
Organizes AI tests for different subjects.
Evaluating Arabic Large Language Models: A Survey of Benchmarks, Methods, and Gaps
Computation and Language
Helps computers understand Arabic better.