Auto-Eval Judge: Towards a General Agentic Framework for Task Completion Evaluation
By: Roshita Bhonsle, Rishav Dutta, Sneha Vavilapalli, and more
Potential Business Impact:
Checks AI's thinking, not just its answers.
The increasing adoption of foundation models as agents across diverse domains necessitates a robust evaluation framework. Current methods, such as LLM-as-a-Judge, focus only on final outputs, overlooking the step-by-step reasoning that drives agentic decision-making. Meanwhile, existing Agent-as-a-Judge systems, where one agent evaluates another's task completion, are typically designed for narrow, domain-specific settings. To address this gap, we propose a generalizable, modular framework for evaluating agent task completion independent of the task domain. The framework emulates human-like evaluation by decomposing tasks into sub-tasks and validating each step using available information, such as the agent's output and reasoning. Each module contributes to a specific aspect of the evaluation process, and their outputs are aggregated to produce a final verdict on task completion. We validate our framework by evaluating the Magentic-One Actor Agent on two benchmarks, GAIA and BigCodeBench. Our Judge Agent predicts task success with closer agreement to human evaluations, achieving 4.76% and 10.52% higher alignment accuracy, respectively, compared to the GPT-4o-based LLM-as-a-Judge baseline. This demonstrates the potential of our proposed general-purpose evaluation framework.
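To make the abstract's pipeline concrete, here is a minimal sketch of the decompose-validate-aggregate flow it describes. All class and function names (JudgeAgent, SubTask, StepEvidence, etc.) are hypothetical illustrations, not the paper's actual API, and the strict-conjunction aggregation rule is an assumption rather than the authors' method.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of the framework described in the abstract:
# decompose the task, validate each step from the actor agent's output
# and reasoning, then aggregate per-step verdicts into a final verdict.


@dataclass
class SubTask:
    description: str


@dataclass
class StepEvidence:
    sub_task: SubTask
    agent_output: str     # what the actor agent produced for this step
    agent_reasoning: str  # the reasoning trace the actor agent emitted


@dataclass
class StepVerdict:
    sub_task: SubTask
    completed: bool
    rationale: str


class JudgeAgent:
    """Modular judge: decompose, validate each step, aggregate a verdict."""

    def __init__(
        self,
        decomposer: Callable[[str], List[SubTask]],
        validator: Callable[[StepEvidence], StepVerdict],
    ):
        self.decomposer = decomposer  # module 1: split task into sub-tasks
        self.validator = validator    # module 2: check a sub-task against evidence

    def evaluate(self, task: str, evidence: List[StepEvidence]) -> bool:
        sub_tasks = self.decomposer(task)
        verdicts = [self.validator(e) for e in evidence]
        # Module 3: aggregation. A strict conjunction is used here for
        # illustration; the paper may combine module outputs differently.
        return len(verdicts) == len(sub_tasks) and all(v.completed for v in verdicts)


# Toy usage with rule-based stand-ins for what would be LLM-backed modules.
decomposer = lambda task: [SubTask(s.strip()) for s in task.split(";")]
validator = lambda ev: StepVerdict(
    ev.sub_task, "done" in ev.agent_output.lower(), "keyword check only"
)

judge = JudgeAgent(decomposer, validator)
evidence = [
    StepEvidence(SubTask("download data"), "Download done", "fetched via HTTP"),
    StepEvidence(SubTask("plot results"), "Plot done", "used matplotlib"),
]
print(judge.evaluate("download data; plot results", evidence))  # True
```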
Similar Papers
Beyond Task Completion: An Assessment Framework for Evaluating Agentic AI Systems
Multiagent Systems
Tests AI agents on how they work together.
When AIs Judge AIs: The Rise of Agent-as-a-Judge Evaluation for LLMs
Artificial Intelligence
AI judges check other AI's work for mistakes.
Multi-Agent LLM Judge: automatic personalized LLM judge design for evaluating natural language generation applications
Computation and Language
Builds custom AI judges to grade computer-written text.