Score: 1

Retrieval Quality at Context Limit

Published: November 8, 2025 | arXiv ID: 2511.05850v1

By: Max McKinnon

BigTech Affiliations: Google

Potential Business Impact:

AI remembers everything, even long stories.

Business Areas:
Semantic Search, Internet Services

The ability of large language models (LLMs) to recall and retrieve information from long contexts is critical for many real-world applications. Prior work (Liu et al., 2023) reported that LLMs suffer significant drops in retrieval accuracy for facts placed in the middle of long contexts, an effect known as "Lost in the Middle" (LITM). We find that Gemini 2.5 Flash answers needle-in-a-haystack questions with high accuracy regardless of document position, including when the document sits near the input context limit. Our results suggest that the Lost-in-the-Middle effect is not present for simple factoid Q&A in Gemini 2.5 Flash, indicating substantial improvements in long-context retrieval.
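The needle-in-a-haystack setup described above can be sketched as follows. This is a minimal illustration, not the paper's actual harness: the filler text, needle fact, and the `ask_model` call are all hypothetical placeholders, and the real evaluation would call the Gemini API and score the responses.

```python
# Minimal needle-in-a-haystack sketch (hypothetical, not the paper's code).
# A known fact (the "needle") is buried at a chosen relative depth inside
# repeated filler text (the "haystack"), then the model is asked to recall it.

FILLER = "The sky was clear and the market was quiet that day."
NEEDLE = "The secret code for the vault is 7291."
QUESTION = "What is the secret code for the vault?"

def build_haystack(total_sentences: int, needle_depth: float) -> str:
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end)."""
    sentences = [FILLER] * total_sentences
    idx = min(int(needle_depth * total_sentences), total_sentences - 1)
    sentences.insert(idx, NEEDLE)
    return " ".join(sentences)

def make_prompt(context: str) -> str:
    return f"Context:\n{context}\n\nQuestion: {QUESTION}\nAnswer:"

# Sweep the needle across depths, as in a Lost-in-the-Middle probe.
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    prompt = make_prompt(build_haystack(1000, depth))
    # answer = ask_model(prompt)  # hypothetical model call
    # success = "7291" in answer  # score by exact recall of the needle
```

In the actual study the haystack would be grown until the prompt approaches the model's input context limit, and accuracy would be recorded per depth to look for a mid-context dip.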

Country of Origin
🇺🇸 United States

Page Count
3 pages

Category
Computer Science:
Information Retrieval