Dynamic Tool Dependency Retrieval for Efficient Function Calling
By: Bhrij Patel, Davide Belli, Amir Jalalirad, et al.
Function calling agents powered by Large Language Models (LLMs) select external tools to automate complex tasks. On-device agents typically use a retrieval module to select relevant tools, improving performance and reducing context length. However, existing retrieval methods rely on static and limited inputs, failing to capture multi-step tool dependencies and evolving task context. This limitation often introduces irrelevant tools that mislead the agent, degrading efficiency and accuracy. We propose Dynamic Tool Dependency Retrieval (DTDR), a lightweight retrieval method that conditions on both the initial query and the evolving execution context. DTDR models tool dependencies from function calling demonstrations, enabling adaptive retrieval as plans unfold. We benchmark DTDR against state-of-the-art retrieval methods across multiple datasets and LLM backbones, evaluating retrieval precision, downstream task accuracy, and computational efficiency. Additionally, we explore strategies to integrate retrieved tools into prompts. Our results show that dynamic tool retrieval improves function calling success rates by $23\%$ to $104\%$ over state-of-the-art static retrievers.
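To make the core idea concrete, here is a minimal sketch of dependency-conditioned tool retrieval. It is not the paper's actual DTDR implementation; the class name, the bigram dependency statistics, and the linear score-blending weight `alpha` are all illustrative assumptions. The sketch mines "tool B tends to follow tool A" counts from demonstration traces, then re-ranks candidate tools by blending a static query-relevance score with a dependency prior conditioned on the tools executed so far.

```python
from collections import defaultdict


class DynamicToolRetriever:
    """Illustrative sketch (not the paper's DTDR): blend static
    query-tool relevance with tool-dependency statistics mined
    from demonstration traces."""

    def __init__(self, demo_traces):
        # follows[a][b] counts how often tool b directly follows
        # tool a in the demonstration traces.
        self.follows = defaultdict(lambda: defaultdict(int))
        for trace in demo_traces:
            for a, b in zip(trace, trace[1:]):
                self.follows[a][b] += 1

    def retrieve(self, static_scores, executed, k=3, alpha=0.5):
        """Return the top-k tools, re-ranked by a mix of the static
        relevance score and a dependency prior given the last
        executed tool. alpha is an assumed blending hyperparameter."""
        scores = {}
        for tool, s in static_scores.items():
            if tool in executed:
                continue  # skip tools already called in this plan
            dep = 0.0
            if executed:
                last = executed[-1]
                total = sum(self.follows[last].values()) or 1
                dep = self.follows[last][tool] / total
            scores[tool] = (1 - alpha) * s + alpha * dep
        return sorted(scores, key=scores.get, reverse=True)[:k]


# Usage: after "search" runs, booking tools outrank "pay" because
# demonstrations show they follow "search" directly.
demos = [["search", "book_flight", "pay"],
         ["search", "book_hotel", "pay"]]
retriever = DynamicToolRetriever(demos)
static = {"search": 0.9, "book_flight": 0.4,
          "book_hotel": 0.4, "pay": 0.1}
top2 = retriever.retrieve(static, executed=["search"], k=2)
```

As the plan unfolds, `executed` grows and the ranking adapts, which is the behavior a static retriever (scoring against the initial query only) cannot provide.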