Internal Representations as Indicators of Hallucinations in Agent Tool Selection
By: Kait Healy, Bharathi Srinivasan, Visakh Madathil, and more
Potential Business Impact:
Detects when AI picks the wrong tool or fakes a tool's output instead of calling it.
Large Language Models (LLMs) have shown remarkable capabilities in tool calling and tool usage, but they suffer from hallucinations: they choose incorrect tools, provide malformed parameters, and exhibit 'tool bypass' behavior, simulating results and generating outputs instead of invoking specialized tools or external systems. This undermines the reliability of LLM-based agents in production systems, as it leads to inconsistent results and circumvents security and audit controls. Such hallucinations in agent tool selection require early detection and error handling. Unlike existing hallucination detection methods that require multiple forward passes or external validation, we present a computationally efficient framework that detects tool-calling hallucinations in real time by leveraging the LLM's internal representations during the same forward pass used for generation. We evaluate this approach on reasoning tasks across multiple domains and demonstrate strong detection performance (up to 86.4% accuracy) while maintaining real-time inference with minimal computational overhead. The approach is particularly effective at detecting parameter-level hallucinations and inappropriate tool selections, which is critical for reliable agent deployment.
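To make the idea concrete, below is a minimal sketch of the general technique the abstract describes: attaching a lightweight probe to a model's hidden states so a generated tool call can be scored for hallucination without extra generation passes. This is not the authors' released code; the model name, probed layer, pooling strategy, probe architecture, and decision threshold are all illustrative assumptions, and a real probe would be trained on labeled tool-call traces.

```python
# Sketch: probing internal representations to flag tool-calling hallucinations.
# Assumptions are marked in comments; this is not the paper's exact method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"  # assumption: any causal LM
PROBE_LAYER = -4  # assumption: a late (but not final) layer

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

# Lightweight linear probe: hidden state -> P(hallucinated tool call).
# In practice this would be trained on labeled examples of good/bad tool calls.
probe = torch.nn.Linear(model.config.hidden_size, 1)

@torch.no_grad()
def hallucination_score(prompt_with_tool_call: str) -> float:
    """Score a tool call from the model's hidden states. In deployment, the
    hidden states already produced while generating the call would be reused,
    so no additional forward passes are needed."""
    inputs = tokenizer(prompt_with_tool_call, return_tensors="pt")
    outputs = model(**inputs, output_hidden_states=True)
    # Mean-pool the chosen layer over the sequence (pooling choice is an
    # assumption; last-token pooling is a common alternative).
    hidden = outputs.hidden_states[PROBE_LAYER].float().mean(dim=1)
    return torch.sigmoid(probe(hidden)).item()

score = hallucination_score(
    "User: what's the weather in Paris?\n"
    "Assistant tool call: get_weather(city='Paris', units='celsius')"
)
if score > 0.5:  # threshold is an assumption; tune on a validation set
    print(f"Possible tool-calling hallucination (score={score:.2f})")
```

Because the probe is a single linear layer over activations the model computes anyway, the added cost is negligible, which is what makes this style of detection viable for real-time agent pipelines.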
Similar Papers
LLM-based Agents Suffer from Hallucinations: A Survey of Taxonomy, Methods, and Directions
Artificial Intelligence
Surveys why AI agents make things up and how to detect and fix it.
HEAL: An Empirical Study on Hallucinations in Embodied Agents Driven by Large Language Models
Machine Learning (CS)
Studies how AI-driven robots misread their surroundings.
A comprehensive taxonomy of hallucinations in Large Language Models
Computation and Language
Sorts the ways AI makes things up into clear categories.