Score: 3

Over-Searching in Search-Augmented Large Language Models

Published: January 9, 2026 | arXiv ID: 2601.05503v1

By: Roy Xie, Deepak Gopinath, David Qiu, and more

BigTech Affiliations: Apple

Potential Business Impact:

Reduces unnecessary search calls, cutting compute cost and curbing hallucinated answers.

Business Areas:
Semantic Search, Internet Services

Search-augmented large language models (LLMs) excel at knowledge-intensive tasks by integrating external retrieval. However, they often over-search -- unnecessarily invoking the search tool even when it does not improve response quality, which wastes computation and can induce hallucinations by incorporating irrelevant context. In this work, we conduct a systematic evaluation of over-searching across multiple dimensions, including query types, model categories, retrieval conditions, and multi-turn conversations. Our findings show: (i) search generally improves answer accuracy on answerable queries but harms abstention on unanswerable ones; (ii) over-searching is more pronounced in complex reasoning models and deep research systems, is exacerbated by noisy retrieval, and compounds across turns in multi-turn conversations; and (iii) the composition of retrieved evidence is crucial, as the presence of negative evidence improves abstention. To quantify over-searching, we introduce Tokens Per Correctness (TPC), an evaluation metric that captures the performance-cost trade-off for search-augmented LLMs. Lastly, we investigate mitigation approaches at both the query and retrieval levels and release the OverSearchQA benchmark to foster continued research into efficient search-augmented LLMs.
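To make the cost-accuracy intuition behind TPC concrete, here is a minimal sketch assuming TPC is computed as total tokens consumed divided by the number of correct responses (the paper's exact formulation may differ; function and variable names here are illustrative, not from the paper):

```python
def tokens_per_correctness(token_counts, correct_flags):
    """Illustrative Tokens Per Correctness (TPC): total tokens spent
    across all queries divided by the number of correct answers.
    Lower is better; a model that over-searches inflates token_counts
    without adding correct answers, raising its TPC.
    NOTE: assumed definition for illustration; see the paper for details."""
    total_tokens = sum(token_counts)
    num_correct = sum(bool(c) for c in correct_flags)
    if num_correct == 0:
        return float("inf")  # no correct answers: cost per correctness is unbounded
    return total_tokens / num_correct
```

For example, a run spending 120, 300, and 80 tokens on three queries with only the first and third answered correctly yields a TPC of (120 + 300 + 80) / 2 = 250 tokens per correct answer.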

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
28 pages

Category
Computer Science:
Machine Learning (CS)