LLM-Crowdsourced: A Benchmark-Free Paradigm for Mutual Evaluation of Large Language Models
By: Qianhong Guo, Wei Xie, Xiaofang Cai, and more
Potential Business Impact:
Tests AI to see if it's smart or just copying.
Although large language models (LLMs) demonstrate remarkable capabilities across various tasks, evaluating those capabilities remains challenging. Existing evaluation methods suffer from issues such as data contamination, black-box operation, and subjective preference, making it difficult to assess LLMs' true capabilities comprehensively. To tackle these challenges, we propose a novel benchmark-free evaluation paradigm, LLM-Crowdsourced, which uses LLMs to generate questions, answer them independently, and evaluate one another. This method integrates four key evaluation criteria: dynamic, transparent, objective, and professional, which existing evaluation methods cannot satisfy simultaneously. Experiments on eight mainstream LLMs across mathematics and programming verify the advantages of our method in distinguishing LLM performance. Furthermore, our study reveals several novel findings that are difficult for traditional methods to detect, including but not limited to: (1) Gemini demonstrates the highest original and professional question-design capabilities among the evaluated models; (2) some LLMs exhibit "memorization-based answering" by misrecognizing questions as familiar ones with a similar structure; (3) LLM evaluation results demonstrate high consistency (robustness).
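To make the question-answer-evaluate loop concrete, the sketch below shows one plausible reading of the paradigm described in the abstract: each model poses a question, its peers answer independently, and all models grade the answers, with peer scores averaged per model. The `ask` helper, the prompt wording, and the 0-10 scoring scale are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a mutual-evaluation round, assuming a generic chat-style
# LLM API behind the hypothetical ask() helper.
from collections import defaultdict
from statistics import mean


def ask(model: str, prompt: str) -> str:
    """Hypothetical wrapper around an LLM API call; replace with a real client."""
    raise NotImplementedError


def mutual_evaluation(models: list[str], domain: str, rounds: int = 1) -> dict[str, float]:
    scores: dict[str, list[float]] = defaultdict(list)
    for _ in range(rounds):
        for questioner in models:
            # 1. The questioner designs an original question in the target domain.
            question = ask(questioner, f"Write one original, professional {domain} question.")
            # 2. The remaining models answer independently, without seeing each other.
            answerers = [m for m in models if m != questioner]
            answers = {m: ask(m, f"Answer this question:\n{question}") for m in answerers}
            # 3. Every model grades each peer's answer (no self-grading).
            for grader in models:
                for answerer, answer in answers.items():
                    if grader == answerer:
                        continue
                    verdict = ask(
                        grader,
                        f"Question:\n{question}\nAnswer:\n{answer}\n"
                        "Score this answer from 0 to 10. Reply with the number only.",
                    )
                    scores[answerer].append(float(verdict.strip()))
    # Aggregate each model's average peer score.
    return {m: mean(s) for m, s in scores.items()}
```

Because every model acts as both examiner and examinee, the resulting scores depend on fresh, model-generated questions rather than a fixed benchmark, which is what the abstract means by a dynamic, transparent, and benchmark-free evaluation.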
Similar Papers
On Robustness and Reliability of Benchmark-Based Evaluation of LLMs
Computation and Language
Tests make smart computers seem less smart.
Toward Generalizable Evaluation in the LLM Era: A Survey Beyond Benchmarks
Computation and Language
Tests AI better as it gets smarter.