JudgeAgent: Dynamically Evaluate LLMs with Agent-as-Interviewer

Published: September 2, 2025 | arXiv ID: 2509.02097v1

By: Zhichao Shi, Xuhui Jiang, Chengjin Xu, and more

Potential Business Impact:

Evaluates AI models more precisely by asking adaptive follow-up questions of adjustable difficulty.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Evaluating the capabilities of large language models (LLMs) is an essential step to ensure the successful application of LLMs across various domains. Current LLM evaluation follows a paradigm of querying models with predefined question sets and assessing their outputs. This paradigm is controllable and simple, but it limits interaction with the target, offers little difficulty control, and makes the validity of results hard to verify, so the knowledge and capability boundaries of target models are difficult to determine precisely. To address these challenges, we propose JudgeAgent, a knowledge-target adaptive dynamic evaluation framework based on a new interviewer-style evaluation paradigm. JudgeAgent employs a comprehensive evaluation approach consisting of benchmark grading, interactive extension, and evaluation feedback. It uses knowledge-driven data synthesis and target-adaptive difficulty adjustment to conduct extended testing, producing accurate and effective evaluation results. We also introduce a new perspective on validating evaluation methods, and we demonstrate the effectiveness of JudgeAgent and its dynamic evaluation paradigm through extensive experiments.
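To make the interviewer-style loop concrete, here is a minimal sketch of how the three stages named in the abstract (benchmark grading, interactive extension, evaluation feedback) could fit together. All function names, the 1-5 difficulty scale, and the difficulty-adjustment rule are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an interviewer-style dynamic evaluation loop.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Question:
    text: str
    difficulty: int  # assumed scale: 1 (easy) .. 5 (hard)

def dynamic_interview(
    target_llm: Callable[[str], str],                   # model under evaluation
    judge: Callable[[str, str], bool],                  # grades a (question, answer) pair
    synthesize: Callable[[Question, bool], Question],   # builds a follow-up question
    seed_questions: List[Question],
    max_turns: int = 3,
) -> List[dict]:
    """Grade seeded benchmark items, then extend each with adaptive follow-ups."""
    records = []
    for seed in seed_questions:
        current = seed
        for _ in range(max_turns):
            answer = target_llm(current.text)
            correct = judge(current.text, answer)
            records.append({
                "question": current.text,
                "difficulty": current.difficulty,
                "correct": correct,
            })
            # Target-adaptive difficulty adjustment (assumed rule):
            # harder after a correct answer, easier after a miss, stop at bounds.
            next_difficulty = current.difficulty + (1 if correct else -1)
            if not 1 <= next_difficulty <= 5:
                break
            current = synthesize(current, correct)
            current.difficulty = next_difficulty
    return records

# Usage with trivial stubs in place of real model and judge calls:
records = dynamic_interview(
    target_llm=lambda prompt: "42",
    judge=lambda q, a: a.strip() == "42",
    synthesize=lambda q, ok: Question(text=q.text + " (follow-up)", difficulty=q.difficulty),
    seed_questions=[Question("What is 6 x 7?", difficulty=2)],
)
```

In a full system the `judge` and `synthesize` callables would themselves be LLM-backed agents, and the collected records would feed the evaluation-feedback stage; this sketch only shows the control flow of the adaptive interview.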

Page Count
18 pages

Category
Computer Science:
Computation and Language