Investigating Tool-Memory Conflicts in Tool-Augmented LLMs
By: Jiali Cheng, Rui Pan, Hadi Amiri
Tool-augmented large language models (LLMs) have powered many applications. However, they are prone to knowledge conflicts. In this paper, we propose a new type of knowledge conflict -- Tool-Memory Conflict (TMC), where a tool-augmented LLM's internal parametric knowledge contradicts the knowledge provided by its external tools. We find that existing LLMs, though powerful, suffer from TMC, especially on STEM-related tasks. We also uncover that, under different conditions, tool knowledge and parametric knowledge may be prioritized differently. We then evaluate existing conflict-resolution techniques, including prompting-based and RAG-based methods. Results show that none of these approaches can effectively resolve tool-memory conflicts.
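To make the notion of a Tool-Memory Conflict concrete, here is a minimal illustrative sketch (not the paper's actual evaluation setup; all function names and the memorized answer are hypothetical): the model's parametric answer to an arithmetic question disagrees with the output of an external calculator tool, and a simple check flags the disagreement.

```python
# Hypothetical sketch of a Tool-Memory Conflict (TMC): the model's
# internal (parametric) answer disagrees with an external tool's output.
# All names and values here are illustrative, not from the paper.

def parametric_answer(question: str) -> str:
    """Stand-in for an LLM answering from its parametric memory.
    Returns a memorized (and here, incorrect) value."""
    memorized = {"What is 17 * 24?": "398"}  # wrong memorized answer
    return memorized[question]

def calculator_tool(expression: str) -> str:
    """Stand-in for an external calculator tool computing ground truth."""
    return str(eval(expression, {"__builtins__": {}}))  # 17 * 24 -> "408"

def detect_tmc(question: str, expression: str) -> bool:
    """Flag a tool-memory conflict when the two sources disagree."""
    return parametric_answer(question) != calculator_tool(expression)

print(detect_tmc("What is 17 * 24?", "17 * 24"))  # True: conflict detected
```

In this toy setting the tool is always right; the harder question the paper studies is which source the model actually trusts when the two disagree.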
Similar Papers
Task Matters: Knowledge Requirements Shape LLM Responses to Context-Memory Conflict
Computation and Language
Helps computers know when to trust new facts.
ToolMem: Enhancing Multimodal Agents with Learnable Tool Capability Memory
Computation and Language
Helps computers pick the best tool for jobs.
Tool Unlearning for Tool-Augmented LLMs
Machine Learning (CS)
Removes old tools from smart computer programs.