AutoTool: Efficient Tool Selection for Large Language Model Agents
By: Jingyi Jia, Qinbin Li
Potential Business Impact:
Makes smart computer helpers work faster and cheaper.
Large Language Model (LLM) agents have emerged as powerful tools for automating complex tasks by leveraging the reasoning and decision-making abilities of LLMs. However, a major bottleneck in current agent frameworks lies in the high inference cost of tool selection, especially in approaches like ReAct that repeatedly invoke the LLM to determine which tool to use at each step. In this work, we propose AutoTool, a novel graph-based framework that bypasses repeated LLM inference by exploiting a key empirical observation, tool usage inertia: the tendency of tool invocations to follow predictable sequential patterns. AutoTool constructs a directed graph from historical agent trajectories, where nodes represent tools and edges capture transition probabilities, effectively modeling the inertia in tool selection. It further integrates parameter-level information to refine tool input generation. By traversing this structured representation, AutoTool efficiently selects tools and their parameters with minimal reliance on LLM inference. Extensive experiments across diverse agent tasks demonstrate that AutoTool reduces inference costs by up to 30% while maintaining competitive task completion rates, offering a practical and scalable enhancement for inference-heavy frameworks. Our work highlights the promise of integrating statistical structure into LLM agent design for greater efficiency without sacrificing performance.
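To make the transition-graph idea concrete, the sketch below shows one plausible way such a structure could be built from historical trajectories and queried to skip an LLM call when the next tool is predictable. All names here (ToolTransitionGraph, min_prob, the None-means-fall-back-to-the-LLM convention) are illustrative assumptions rather than the authors' implementation, and the paper's parameter-level refinement of tool inputs is omitted.

```python
from collections import defaultdict

class ToolTransitionGraph:
    """Directed graph over tools; edge weights are empirical transition counts."""

    def __init__(self):
        # counts[a][b] = number of times tool b followed tool a in past trajectories
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, trajectories):
        """trajectories: iterable of tool-name sequences, e.g. [["search", "read", "answer"], ...]."""
        for traj in trajectories:
            for prev_tool, next_tool in zip(traj, traj[1:]):
                self.counts[prev_tool][next_tool] += 1

    def next_tool(self, current_tool, min_prob=0.6):
        """Return the most likely successor if its transition probability clears
        min_prob; otherwise return None to signal a fallback to LLM inference."""
        successors = self.counts.get(current_tool)
        if not successors:
            return None
        total = sum(successors.values())
        tool, count = max(successors.items(), key=lambda kv: kv[1])
        return tool if count / total >= min_prob else None


# Example: learn transitions from two past runs, then predict the next tool.
graph = ToolTransitionGraph()
graph.fit([
    ["web_search", "read_page", "summarize"],
    ["web_search", "read_page", "read_page", "summarize"],
])
print(graph.next_tool("web_search"))  # "read_page" (high-inertia transition)
print(graph.next_tool("summarize"))   # None -> defer this step to the LLM
```

Under this reading, the graph answers the cheap, high-inertia tool-selection steps, and the LLM is only invoked when no transition is confident enough, which is where the inference-cost savings would come from.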
Similar Papers
GTool: Graph Enhanced Tool Planning with Large Language Model
Artificial Intelligence
Helps computers pick the right tools for jobs.
ML-Tool-Bench: Tool-Augmented Planning for ML Tasks
Machine Learning (CS)
Helps AI agents plan complex data tasks better.
From Language to Action: A Review of Large Language Models as Autonomous Agents and Tool Users
Computation and Language
AI learns to think, plan, and improve itself.