Score: 2

ToolCritic: Detecting and Correcting Tool-Use Errors in Dialogue Systems

Published: October 19, 2025 | arXiv ID: 2510.17052v1

By: Hassan Hamad, Yingru Xu, Liang Zhao and more

BigTech Affiliations: Amazon

Potential Business Impact:

Detects and corrects mistakes AI assistants make when calling external tools, making tool-augmented chatbots and agents more reliable in production.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Tool-augmented large language models (LLMs) are increasingly employed in real-world applications, but tool usage errors still hinder their reliability. We introduce ToolCritic, a diagnostic framework that evaluates and improves LLM behavior in multi-turn, tool-augmented dialogues. ToolCritic detects eight distinct error types specific to tool-calling (e.g., premature invocation, argument misalignment, and misinterpretation of tool outputs) and provides targeted feedback to the main LLM. The main LLM, assumed to have strong reasoning, task understanding and orchestration capabilities, then revises its response based on ToolCritic's feedback. We systematically define these error categories and construct a synthetic dataset to train ToolCritic. Experimental results on the Schema-Guided Dialogue (SGD) dataset demonstrate that ToolCritic improves tool-calling accuracy by up to 13% over baselines, including zero-shot prompting and self-correction techniques. This represents a promising step toward more robust LLM integration with external tools in real-world dialogue applications.
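The abstract describes a critic-in-the-loop setup: a main LLM proposes a tool call, ToolCritic checks it against eight tool-calling error categories, and the main LLM revises its response using the critic's feedback. The sketch below illustrates that loop under stated assumptions; the stub functions (main_llm_respond, toolcritic), the error-type names beyond the three the abstract lists, and the feedback format are illustrative, not the paper's actual implementation.

```python
# Minimal sketch of a ToolCritic-style propose -> critique -> revise loop.
# All model calls are stubbed; a real system would prompt actual LLMs.
from __future__ import annotations
from dataclasses import dataclass

# Hypothetical labels inspired by the error types named in the abstract;
# the remaining categories are not specified in this summary.
ERROR_TYPES = [
    "premature_invocation",      # tool called before enough info was gathered
    "argument_misalignment",     # arguments do not match the user's request
    "output_misinterpretation",  # tool result read incorrectly in the reply
]

@dataclass
class Critique:
    has_error: bool
    error_type: str | None
    feedback: str | None

def main_llm_respond(dialogue: list[str], feedback: str | None = None) -> dict:
    """Stub for the main LLM: returns a proposed tool call."""
    if feedback is None:
        # First attempt: naive call that omits required arguments.
        return {"tool": "book_restaurant", "args": {"name": "Luigi's"}}
    # Revised attempt after receiving the critic's feedback.
    return {"tool": "book_restaurant",
            "args": {"name": "Luigi's", "date": "2025-10-20", "party_size": 2}}

def toolcritic(dialogue: list[str], proposal: dict) -> Critique:
    """Stub critic: flags one assumed error type when required args are missing.
    The real ToolCritic is a trained model covering eight error categories."""
    required = {"name", "date", "party_size"}
    missing = required - proposal["args"].keys()
    if missing:
        return Critique(True, "argument_misalignment",
                        f"Tool call is missing arguments: {sorted(missing)}. "
                        "Fill them from the dialogue or ask the user first.")
    return Critique(False, None, None)

def critic_in_the_loop(dialogue: list[str], max_rounds: int = 2) -> dict:
    """Propose, critique, and revise until the critic finds no error."""
    proposal = main_llm_respond(dialogue)
    for _ in range(max_rounds):
        critique = toolcritic(dialogue, proposal)
        if not critique.has_error:
            break
        proposal = main_llm_respond(dialogue, critique.feedback)
    return proposal

if __name__ == "__main__":
    turns = ["User: Book a table for two at Luigi's tomorrow evening."]
    print(critic_in_the_loop(turns))
```

In this toy run the first proposal is flagged for missing arguments and the revised call passes the critic; the paper's reported gains (up to 13% tool-calling accuracy on SGD) come from a trained critic model rather than the rule-based stub shown here.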

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Artificial Intelligence