Optimizing Agentic Language Model Inference via Speculative Tool Calls
By: Daniel Nichols, Prajwal Singhania, Charles Jekel, and more
Language models (LMs) increasingly depend on external tools. LM-based agentic frameworks interact with their environment via such tools to search files, run code, call APIs, etc. Modern reasoning LMs likewise use tools such as web search and Python code execution to strengthen their reasoning. While tools greatly expand the capabilities of LMs, they also introduce performance bottlenecks during inference. In this paper, we introduce novel systems optimizations that address these bottlenecks by speculating tool calls and keeping sequences resident in the inference engine to minimize overheads. Our optimizations improve throughput by several hundred tokens per second when hosting inference for LM agents. We give a theoretical analysis of our algorithms that offers insight into which speculation configurations yield the best performance. Further, we recommend a new "tool cache" API endpoint to enable LM providers to easily adopt these optimizations.
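The abstract names the mechanism but not its interfaces, so the following is a minimal sketch of one plausible realization: when a sequence issues a tool call, the engine launches the real tool, continues decoding against a speculated output instead of evicting the sequence, and reconciles once the true result arrives. All names here (ToySequence, predict_tool_output, speculative_tool_call) are illustrative assumptions, not the paper's actual implementation.

```python
import asyncio


class ToySequence:
    """Stand-in for an engine-resident sequence with rollback support."""

    def __init__(self, prompt: str):
        self.tokens = [prompt]

    def snapshot(self) -> int:
        # Position marking the sequence state before the tool result.
        return len(self.tokens)

    def restore(self, mark: int) -> None:
        # Discard everything decoded after the snapshot (mis-speculation).
        del self.tokens[mark:]

    def append(self, text: str) -> None:
        self.tokens.append(text)


async def run_tool(call: str) -> str:
    """Stand-in for a real tool: file search, code execution, an API call."""
    await asyncio.sleep(0.2)  # the latency we want to hide
    return f"real:{call}"


def predict_tool_output(call: str, correct: bool) -> str:
    """Cheap guess at the tool's output, e.g. from a cache of past calls."""
    return f"real:{call}" if correct else f"stale:{call}"


async def speculative_tool_call(seq: ToySequence, call: str, correct: bool) -> None:
    real = asyncio.create_task(run_tool(call))  # launch the real tool
    mark = seq.snapshot()
    guess = predict_tool_output(call, correct)
    seq.append(guess)
    # The sequence stays resident: keep decoding against the guessed
    # output instead of evicting it while the tool runs.
    while not real.done():
        seq.append("<token>")
        await asyncio.sleep(0.01)  # one decode step
    actual = await real
    if actual != guess:
        # Mis-speculation: roll back and splice in the real result.
        seq.restore(mark)
        seq.append(actual)
    # On a hit, the tokens decoded during the tool call are kept for free.


async def main():
    seq = ToySequence("user prompt")
    await speculative_tool_call(seq, "search('foo')", correct=True)
    await speculative_tool_call(seq, "run('bar.py')", correct=False)
    print(seq.tokens)


asyncio.run(main())
```

Whether rollback is realized via KV-cache snapshots, prefix reuse, or another engine mechanism is a design choice the abstract does not specify; the sketch only shows the control flow that lets decoding overlap tool latency.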