Benchmarking Failures in Tool-Augmented Language Models
By: Eduardo Treviño, Hugo Contant, James Ngai, and more
Potential Business Impact:
Helps AI recognize and recover when information or tools are missing.
The integration of tools has extended the capabilities of language models (LMs) beyond vanilla text generation to versatile scenarios. However, tool-augmented language models (TaLMs) often assume 'perfect' information access and tool availability, assumptions that may not hold in the real world. To systematically study TaLMs' imperfections, we introduce the FAIL-TALMS benchmark, featuring two major failure modes: under-specified user queries and non-available tools. FAIL-TALMS contains 1,749 examples using 906 tools across 21 categories, covering both single- and multi-tool usage. We evaluate top-performing proprietary and open-source models and find that all current models except Claude struggle to recognize missing tools or information. Further, to study possible mitigation of these failures, we introduce the Ask-and-Help (AAH) method, which enables real-time human interaction to provide missing information or replace non-functional tools. While AAH helps models solve tasks more correctly when queries are under-specified, it brings minimal benefit when complex tools are broken.
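To make the Ask-and-Help idea concrete, here is a minimal Python sketch, written under our own assumptions rather than as the paper's actual implementation, of how a tool-calling loop might fall back to a human when a query is under-specified or a tool is unavailable. All names (ToolCall, ask_user, AVAILABLE_TOOLS, run_with_aah) are hypothetical.

# Minimal sketch (assumptions, not the FAIL-TALMS code) of an Ask-and-Help-style
# loop: execute a tool plan, and ask the human when an argument is missing
# (under-specified query) or a tool is not available (broken tool).
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    arguments: dict

# Hypothetical registry of tools that are actually working right now.
AVAILABLE_TOOLS = {"weather_lookup"}

def ask_user(prompt: str) -> str:
    """Stand-in for real-time human interaction: ask for help and return the reply."""
    return input(f"[model -> user] {prompt}\n> ")

def run_with_aah(query: str, plan: list[ToolCall]) -> str:
    """Execute a tool plan, falling back to the human when info or tools are missing."""
    for call in plan:
        # Failure 1: under-specified query -> a required argument has no value.
        missing = [key for key, value in call.arguments.items() if value is None]
        for key in missing:
            call.arguments[key] = ask_user(
                f"I need '{key}' to call {call.name}. What should it be?"
            )
        # Failure 2: non-available tool -> ask for a substitute or a manual answer.
        if call.name not in AVAILABLE_TOOLS:
            substitute = ask_user(
                f"Tool '{call.name}' is unavailable. Can you suggest an alternative "
                f"or provide the result yourself?"
            )
            return f"Answered with human help: {substitute}"
    return f"Executed {len(plan)} tool call(s) for: {query}"

if __name__ == "__main__":
    example_plan = [ToolCall("flight_search", {"origin": None, "destination": "SFO"})]
    print(run_with_aah("Book me a flight to San Francisco", example_plan))

In this sketch the human's reply is returned verbatim for brevity; a real system would feed the provided information or replacement tool back into the model's planning step before continuing.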
Similar Papers
From Proof to Program: Characterizing Tool-Induced Reasoning Hallucinations in Large Language Models
Computation and Language
Makes AI think less when using tools.
TALE: A Tool-Augmented Framework for Reference-Free Evaluation of Large Language Models
Computation and Language
Tests AI answers using the real internet.
ToLeaP: Rethinking Development of Tool Learning with Large Language Models
Artificial Intelligence
Helps computers learn to use new tools better.