JudgeAgent: Dynamically Evaluate LLMs with Agent-as-Interviewer
By: Zhichao Shi, Xuhui Jiang, Chengjin Xu, and more
Potential Business Impact:
Tests AI more thoroughly by asking harder, adaptive questions.
Evaluating the capabilities of large language models (LLMs) is an essential step toward their successful application across various domains. Current LLM evaluation follows a paradigm of querying models with predefined question sets and assessing their outputs. This paradigm offers controllable processes and simplicity, but it suffers from limited interaction with targets, insufficient difficulty control, and difficulty in verifying the validity of evaluation results, making it hard to precisely determine the knowledge and capability boundaries of target models. To address these challenges, we propose JudgeAgent, a knowledge-target adaptive dynamic evaluation framework built on a new interviewer-style evaluation paradigm. JudgeAgent employs a comprehensive evaluation process consisting of benchmark grading, interactive extension, and evaluation feedback. It uses knowledge-driven data synthesis and target-adaptive difficulty adjustment to conduct extended testing, yielding accurate and effective evaluation results. We also introduce a novel insight into validating evaluation methods, and demonstrate the effectiveness of JudgeAgent and its dynamic evaluation paradigm through extensive experiments.
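To make the interviewer-style loop concrete, here is a minimal Python sketch, under stated assumptions, of how benchmark grading, interactive extension with target-adaptive difficulty adjustment, and evaluation feedback could fit together. Every name in it (run_interview, synthesize, EvalRecord, the 1-5 difficulty scale) is an illustrative assumption rather than the paper's actual implementation, and the target, grade, and synthesize callables stand in for real LLM calls.

# Illustrative sketch only: names and the difficulty scale are assumptions,
# not the authors' API. Replace the callables with real LLM clients.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Question:
    text: str
    difficulty: int          # assumed scale: 1 (easy) .. 5 (hard)
    knowledge_point: str     # the concept this question probes

@dataclass
class EvalRecord:
    question: Question
    answer: str
    correct: bool

def run_interview(
    target: Callable[[str], str],                      # target LLM: prompt -> answer
    grade: Callable[[Question, str], bool],            # judge: was the answer correct?
    synthesize: Callable[[Question, bool], Question],  # knowledge-driven follow-up question
    seed_questions: List[Question],
    max_followups: int = 3,
) -> List[EvalRecord]:
    """Benchmark grading on seed questions, then interactive extension."""
    records: List[EvalRecord] = []
    for seed in seed_questions:
        current = seed
        for _ in range(max_followups + 1):
            answer = target(current.text)
            correct = grade(current, answer)
            records.append(EvalRecord(current, answer, correct))
            # Target-adaptive difficulty: probe deeper on success, back off on failure.
            next_difficulty = current.difficulty + (1 if correct else -1)
            if not 1 <= next_difficulty <= 5:
                break  # capability boundary reached for this knowledge point
            current = synthesize(current, correct)
            current.difficulty = next_difficulty
    return records

def summarize(records: List[EvalRecord]) -> dict:
    """Evaluation feedback: per-knowledge-point accuracy and hardest level solved."""
    report: dict = {}
    for r in records:
        kp = report.setdefault(
            r.question.knowledge_point,
            {"asked": 0, "correct": 0, "max_solved": 0},
        )
        kp["asked"] += 1
        if r.correct:
            kp["correct"] += 1
            kp["max_solved"] = max(kp["max_solved"], r.question.difficulty)
    return report

A real deployment would hide API clients for the judge and target models behind these callables; the point of the sketch is the control flow: keep probing a knowledge point at higher difficulty while the target succeeds, and back off or stop once it fails, so the accumulated records trace the target's capability boundary.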
Similar Papers
JudgeAgent: Knowledge-wise and Dynamic LLM Evaluation with Agent-as-Interviewer
Computation and Language
Tests AI with changing questions to find what it knows.
Multi-Agent LLM Judge: automatic personalized LLM judge design for evaluating natural language generation applications
Computation and Language
Helps computers judge writing better than people.
When AIs Judge AIs: The Rise of Agent-as-a-Judge Evaluation for LLMs
Artificial Intelligence
AI judges check other AI's work for mistakes.