SpecAgent: A Speculative Retrieval and Forecasting Agent for Code Completion
By: George Ma, Anurag Koul, Qi Chen, and more
Potential Business Impact:
Helps computers write better code faster.
Large Language Models (LLMs) excel at code-related tasks but often struggle in realistic software repositories, where project-specific APIs and cross-file dependencies are crucial. Retrieval-augmented methods mitigate this by injecting repository context at inference time. However, the tight inference-time latency budget forces a trade-off: either retrieval quality suffers, or the added latency degrades user experience. We address this limitation with SpecAgent, an agent that improves both latency and code-generation quality by proactively exploring repository files during indexing and constructing speculative context that anticipates future edits in each file. Because this work happens asynchronously at indexing time, context can be computed thoroughly while its latency is masked, and the speculative nature of the context improves code-generation quality. Additionally, we identify the problem of future context leakage in existing benchmarks, which can inflate reported performance. To address this, we construct a synthetic, leakage-free benchmark that enables a more realistic evaluation of our agent against baselines. Experiments show that SpecAgent consistently achieves absolute gains of 9-11% (48-58% relative) over the best-performing baselines, while significantly reducing inference latency.
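The two-phase design described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the dependency heuristic, and the context format are all hypothetical stand-ins for the agentic exploration SpecAgent actually performs. The key point it shows is that the expensive context construction runs at indexing time, so inference reduces to a cheap cache lookup.

```python
class SpecAgentIndex:
    """Hypothetical sketch: build per-file context at indexing time,
    so inference-time lookup adds no retrieval latency."""

    def __init__(self):
        self._context = {}  # file path -> precomputed speculative context

    def index_repository(self, repo_files):
        """Indexing time (asynchronous, off the latency-critical path):
        explore each file and build context anticipating future edits."""
        for path, source in repo_files.items():
            self._context[path] = self._build_context(path, source, repo_files)

    def _build_context(self, path, source, repo_files):
        # Placeholder heuristic standing in for the paper's agentic
        # exploration: link files whose module name appears in this source.
        related = [
            other for other in repo_files
            if other != path
            and other.rsplit("/", 1)[-1].split(".")[0] in source
        ]
        return {"file": path, "related_files": related}

    def context_for(self, path):
        """Inference time: a dictionary lookup, no on-the-fly retrieval."""
        return self._context.get(path, {"file": path, "related_files": []})


# Toy repository (illustrative paths and contents).
repo = {
    "utils/math_ops.py": "def add(a, b): return a + b",
    "app/main.py": "from utils.math_ops import add\nprint(add(1, 2))",
}
index = SpecAgentIndex()
index.index_repository(repo)          # slow work happens here, once
ctx = index.context_for("app/main.py")  # fast lookup at completion time
```

The design choice this illustrates is the one the abstract emphasizes: because indexing runs asynchronously, the context builder can afford deep, repository-wide analysis without that cost ever appearing in the user-facing completion path.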
Similar Papers
Reducing Latency of LLM Search Agent via Speculation-based Algorithm-System Co-Design
Artificial Intelligence
Makes smart computer searches much faster.
AgentSpec: Customizable Runtime Enforcement for Safe and Reliable LLM Agents
Artificial Intelligence
Keeps AI robots from doing bad or dangerous things.
Speculative Actions: A Lossless Framework for Faster Agentic Systems
Artificial Intelligence
AI agents act much faster by guessing ahead.