ToolCritic: Detecting and Correcting Tool-Use Errors in Dialogue Systems
By: Hassan Hamad, Yingru Xu, Liang Zhao, and others
Potential Business Impact:
Catches and corrects the mistakes AI assistants make when calling tools.
Tool-augmented large language models (LLMs) are increasingly employed in real-world applications, but tool usage errors still hinder their reliability. We introduce ToolCritic, a diagnostic framework that evaluates and improves LLM behavior in multi-turn, tool-augmented dialogues. ToolCritic detects eight distinct error types specific to tool-calling (e.g., premature invocation, argument misalignment, and misinterpretation of tool outputs) and provides targeted feedback to the main LLM. The main LLM, assumed to have strong reasoning, task understanding and orchestration capabilities, then revises its response based on ToolCritic's feedback. We systematically define these error categories and construct a synthetic dataset to train ToolCritic. Experimental results on the Schema-Guided Dialogue (SGD) dataset demonstrate that ToolCritic improves tool-calling accuracy by up to 13% over baselines, including zero-shot prompting and self-correction techniques. This represents a promising step toward more robust LLM integration with external tools in real-world dialogue applications.
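To make the detect-then-revise loop described above concrete, here is a minimal sketch of how a critic model could be wired around a main LLM. All function names, the feedback format, and the error labels beyond the three quoted in the abstract are illustrative assumptions, not the paper's actual interface.

```python
# Sketch of a critic-then-revise loop in the spirit of ToolCritic.
# run_critic and run_main_llm are hypothetical wrappers around model calls.

from dataclasses import dataclass

# Error categories the critic can flag. The abstract names three examples
# ("premature invocation", "argument misalignment", "misinterpretation of
# tool outputs"); ToolCritic defines eight in total, so the rest are
# placeholders here.
ERROR_TYPES = [
    "premature_invocation",
    "argument_misalignment",
    "output_misinterpretation",
    "no_error",  # critic found nothing to correct
]


@dataclass
class CriticVerdict:
    error_type: str
    feedback: str  # targeted natural-language feedback for the main LLM


def run_critic(dialogue_history: list[str], assistant_turn: str) -> CriticVerdict:
    """Hypothetical wrapper around the trained critic model.

    In practice this would prompt the diagnostic LLM with the dialogue so
    far plus the assistant's proposed (tool-calling) turn and parse its
    structured verdict. Stubbed out for illustration.
    """
    # ... critic model call would go here ...
    return CriticVerdict(error_type="no_error", feedback="")


def run_main_llm(dialogue_history: list[str], critic_feedback: str | None = None) -> str:
    """Hypothetical wrapper around the main (orchestrating) LLM.

    When critic_feedback is provided, it is appended to the prompt so the
    main LLM can revise its previous turn.
    """
    # ... main LLM call would go here ...
    return "<assistant turn, possibly containing a tool call>"


def respond_with_critique(dialogue_history: list[str]) -> str:
    """Generate a turn, have the critic check it, and revise once if flagged."""
    draft = run_main_llm(dialogue_history)
    verdict = run_critic(dialogue_history, draft)
    if verdict.error_type == "no_error":
        return draft
    # Targeted feedback (e.g., "the tool was called before the user supplied
    # the required date argument") drives a single revision pass.
    return run_main_llm(dialogue_history, critic_feedback=verdict.feedback)
```

In this sketch the critic is only consulted once per turn; whether the paper iterates the critique or applies it selectively is not specified in the abstract.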
Similar Papers
CRITICTOOL: Evaluating Self-Critique Capabilities of Large Language Models in Tool-Calling Error Scenarios
Software Engineering
Tests whether AI can spot its own mistakes when using tools.
Multi-Faceted Evaluation of Tool-Augmented Dialogue Systems
Computation and Language
Finds hidden mistakes in talking computer helpers.
AskToAct: Enhancing LLMs Tool Use via Self-Correcting Clarification
Computation and Language
Helps computers ask better questions to understand you.