Reducing Latency of LLM Search Agent via Speculation-based Algorithm-System Co-Design
By: Zixiao Huang, Wen Zeng, Tianyu Fu, and more
Potential Business Impact:
Makes LLM-powered search agents respond much faster.
LLM-based search agents achieve strong performance but suffer from severe latency, as each step requires serialized LLM reasoning followed by tool execution. We revisit this bottleneck through the lens of speculation. While the traditional predict-verify speculation paradigm can break serial execution, its benefit remains limited: it retains the full original workload and adds extra inference overhead. We observe that early agent steps often involve simple evidence-gathering, where correct actions can frequently be predicted without full reasoning. Building on these observations, we present SPAgent, an algorithm-system co-design framework that expands the role of speculation in search agents to reduce latency. Algorithmically, SPAgent introduces a two-phase adaptive speculation mechanism that selectively omits verification when it is safe to do so. System-wise, a two-level scheduler regulates speculative requests based on engine load to ensure speculation remains beneficial. We implement SPAgent in real-world systems. Across extensive experimental settings, SPAgent achieves up to $1.65\times$ end-to-end speedup while maintaining the same, or even achieving higher, accuracy, enabling practical deployment of multi-step search agents.
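The two ideas in the abstract — selectively skipping verification for easily predicted early steps, and admitting speculative requests only when the serving engine has spare capacity — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the step-index heuristic, and the thresholds (`confidence_threshold`, `high_watermark`) are all assumptions for exposition.

```python
def run_agent_step(step_idx, context, predictor, reasoner,
                   confidence_threshold=0.8, early_phase_steps=3):
    """Two-phase adaptive speculation (illustrative sketch).

    `predictor` is a cheap draft model returning (action, confidence);
    `reasoner` runs full serialized LLM reasoning. Both are hypothetical
    callables standing in for the agent's actual components.
    """
    action, confidence = predictor(context)
    if step_idx < early_phase_steps and confidence >= confidence_threshold:
        # Phase 1: early evidence-gathering steps. The speculated action is
        # executed directly and verification is omitted, since such steps
        # are usually predictable without full reasoning.
        return action, "speculated"
    # Phase 2: later, harder steps fall back to full reasoning,
    # so accuracy is preserved where speculation is risky.
    return reasoner(context), "reasoned"


def should_admit_speculation(engine_load, high_watermark=0.85):
    """Load-aware gating in the spirit of the two-level scheduler (sketch):
    speculative requests are admitted only while the inference engine has
    spare capacity, so speculation never crowds out normal traffic."""
    return engine_load < high_watermark
```

A gating check like `should_admit_speculation` would run before each call to `predictor`, so that under heavy engine load the agent degrades gracefully to the ordinary serialized reason-then-act loop.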
Similar Papers
Speculative Actions: A Lossless Framework for Faster Agentic Systems
Artificial Intelligence
AI agents act much faster by guessing ahead.
SpecAgent: A Speculative Retrieval and Forecasting Agent for Code Completion
Software Engineering
Helps computers write better code faster.
Dynamic Speculative Agent Planning
Artificial Intelligence
Makes AI faster and cheaper to use.