Beyond Max Tokens: Stealthy Resource Amplification via Tool Calling Chains in LLM Agents
By: Kaiyu Zhou , Yongsen Zheng , Yicheng He and more
Potential Business Impact:
Makes AI assistants waste time and money.
The agent-tool communication loop is a critical attack surface in modern Large Language Model (LLM) agents. Existing Denial-of-Service (DoS) attacks, primarily triggered via user prompts or injected retrieval-augmented generation (RAG) context, are ineffective for this new paradigm. They are fundamentally single-turn and often lack a task-oriented approach, making them conspicuous in goal-oriented workflows and unable to exploit the compounding costs of multi-turn agent-tool interactions. We introduce a stealthy, multi-turn economic DoS attack that operates at the tool layer under the guise of a correctly completed task. Our method adjusts text-visible fields and a template-governed return policy in a benign, Model Context Protocol (MCP)-compatible tool server, optimizing these edits with a Monte Carlo Tree Search (MCTS) optimizer. These adjustments leave function signatures unchanged and preserve the final payload, steering the agent into prolonged, verbose tool-calling sequences using text-only notices. This compounds costs across turns, escaping single-turn caps while keeping the final answer correct to evade validation. Across six LLMs on the ToolBench and BFCL benchmarks, our attack expands tasks into trajectories exceeding 60,000 tokens, inflates costs by up to 658x, and raises energy by 100-560x. It drives GPU KV cache occupancy from <1% to 35-74% and cuts co-running throughput by approximately 50%. Because the server remains protocol-compatible and task outcomes are correct, conventional checks fail. These results elevate the agent-tool interface to a first-class security frontier, demanding a paradigm shift from validating final answers to monitoring the economic and computational cost of the entire agentic process.
Similar Papers
LeechHijack: Covert Computational Resource Exploitation in Intelligent Agent Systems
Cryptography and Security
Stops bad tools from stealing computer power.
STAC: When Innocent Tools Form Dangerous Chains to Jailbreak LLM Agents
Cryptography and Security
Finds ways AI can trick itself to do bad things.
Close the Loop: Synthesizing Infinite Tool-Use Data via Multi-Agent Role-Playing
Computation and Language
Teaches computers to use new tools by themselves.