Multi-Faceted Evaluation of Tool-Augmented Dialogue Systems
By: Zhaoyi Joey Hou, Tanya Shourya, Yingfan Wang, and more
Potential Business Impact:
Finds hidden mistakes in talking computer helpers.
Evaluating conversational AI systems that use external tools is challenging, as errors can arise from complex interactions among user, agent, and tools. While existing evaluation methods assess either user satisfaction or agents' tool-calling capabilities, they fail to capture critical errors in multi-turn tool-augmented dialogues, such as when agents misinterpret tool results yet appear satisfactory to users. We introduce TRACE, a benchmark of systematically synthesized tool-augmented conversations covering diverse error cases, and SCOPE, an evaluation framework that automatically discovers diverse error patterns and evaluation rubrics in tool-augmented dialogues. Experiments show SCOPE significantly outperforms the baseline, particularly on challenging cases where user satisfaction signals are misleading.
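To make the failure mode concrete, here is a minimal, hypothetical sketch (not the authors' code or data format) of a multi-turn tool-augmented dialogue with a synthesized error of the kind the abstract describes: the agent misreports a tool result while the user still sounds satisfied. The `Turn`/`Dialogue` classes and the `rubric_tool_result_consistency` check are illustrative assumptions, standing in for one of the rubric-style checks a framework like SCOPE might apply beyond satisfaction signals.

```python
# Hypothetical sketch: a rubric check that catches an error user
# satisfaction alone would miss (agent contradicting a tool result).
from dataclasses import dataclass, field


@dataclass
class Turn:
    role: str                      # "user", "agent", or "tool"
    content: str
    tool_name: str | None = None   # set when role == "tool"


@dataclass
class Dialogue:
    turns: list[Turn] = field(default_factory=list)


def rubric_tool_result_consistency(dialogue: Dialogue) -> bool:
    """Example rubric: the agent's reply following a tool call must not
    drop or alter the tool's returned value (simplified string check)."""
    for i, turn in enumerate(dialogue.turns[:-1]):
        nxt = dialogue.turns[i + 1]
        if turn.role == "tool" and nxt.role == "agent":
            if turn.content not in nxt.content:
                return False   # agent misreported the tool result
    return True


# Synthesized error case: the tool returns $420, the agent reports $240,
# and the user is satisfied, so satisfaction-only evaluation passes it.
dialogue = Dialogue(turns=[
    Turn("user", "Find me the cheapest flight to Tokyo next Friday."),
    Turn("agent", "Let me check flight prices for you."),
    Turn("tool", "$420", tool_name="flight_search"),
    Turn("agent", "The cheapest flight next Friday is $240."),
    Turn("user", "Great, thanks!"),
])

print(rubric_tool_result_consistency(dialogue))  # False: error detected
```

A real framework would discover and apply many such rubrics automatically across diverse error patterns; this single hard-coded check only illustrates why tool-result-level evaluation differs from user-satisfaction-level evaluation.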
Similar Papers
Beyond the Final Answer: Evaluating the Reasoning Trajectories of Tool-Augmented Agents
Artificial Intelligence
Checks if AI solves problems the right way.
ToolCritic: Detecting and Correcting Tool-Use Errors in Dialogue Systems
Artificial Intelligence
Fixes AI mistakes when using tools.
ToolScope: An Agentic Framework for Vision-Guided and Long-Horizon Tool Use
Artificial Intelligence
Helps computers understand pictures and answer questions.