Multi-Faceted Evaluation of Tool-Augmented Dialogue Systems

Published: October 22, 2025 | arXiv ID: 2510.19186v1

By: Zhaoyi Joey Hou, Tanya Shourya, Yingfan Wang, and more

BigTech Affiliations: Amazon

Potential Business Impact:

Surfaces hidden errors in conversational AI assistants that call external tools, including cases where the assistant misreads a tool's output but the user never notices.

Business Areas:
Semantic Search, Internet Services

Evaluating conversational AI systems that use external tools is challenging, as errors can arise from complex interactions among user, agent, and tools. While existing evaluation methods assess either user satisfaction or agents' tool-calling capabilities, they fail to capture critical errors in multi-turn tool-augmented dialogues, such as when agents misinterpret tool results yet appear satisfactory to users. We introduce TRACE, a benchmark of systematically synthesized tool-augmented conversations covering diverse error cases, and SCOPE, an evaluation framework that automatically discovers diverse error patterns and evaluation rubrics in tool-augmented dialogues. Experiments show SCOPE significantly outperforms the baseline, particularly on challenging cases where user satisfaction signals are misleading.
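The paper's rubric-discovery pipeline is not reproduced here, but a minimal sketch can illustrate the failure mode the abstract highlights: an agent turn that contradicts its own tool result while the user still sounds satisfied. All names below (ToolCall, Turn, evaluate_dialogue, get_weather) are hypothetical illustrations, not the TRACE or SCOPE APIs, and the consistency check is a deliberately naive stand-in for the paper's learned rubrics.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    arguments: dict
    result: str

@dataclass
class Turn:
    speaker: str                      # "user" or "agent"
    text: str
    tool_calls: list = field(default_factory=list)

def agent_consistent_with_tools(turn: Turn) -> bool:
    """Toy check: flag agent turns whose reply omits a tool's result.

    A real evaluator would use semantic comparison; substring matching
    is only a placeholder to make the error type concrete.
    """
    return all(call.result in turn.text for call in turn.tool_calls)

def evaluate_dialogue(turns: list) -> list:
    """Return (turn_index, error_label) pairs for suspect agent turns."""
    return [
        (i, "possible tool-result misinterpretation")
        for i, turn in enumerate(turns)
        if turn.speaker == "agent" and not agent_consistent_with_tools(turn)
    ]

# The agent misreads the weather tool's output, yet the user's reply
# signals satisfaction -- exactly the misleading case the paper targets.
dialogue = [
    Turn("user", "Will it rain in Pittsburgh tomorrow?"),
    Turn("agent", "No rain expected, enjoy your day!",
         tool_calls=[ToolCall("get_weather", {"city": "Pittsburgh"},
                              result="80% chance of rain")]),
    Turn("user", "Great, thanks!"),   # satisfaction signal is misleading
]
print(evaluate_dialogue(dialogue))
# [(1, 'possible tool-result misinterpretation')]
```

A user-satisfaction judge would score this dialogue highly, which is why the abstract argues that turn-level, tool-aware checks are needed alongside satisfaction signals.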

Country of Origin
🇺🇸 United States

Page Count
30 pages

Category
Computer Science:
Computation and Language