Retrieval Quality at Context Limit
By: Max McKinnon
Potential Business Impact:
Models can accurately retrieve a specific fact from anywhere in a near-limit-length input, not just from its beginning or end.
The ability of large language models (LLMs) to recall and retrieve information from long contexts is critical for many real-world applications. Prior work (Liu et al., 2023) reported that LLMs suffer significant drops in retrieval accuracy for facts placed in the middle of long contexts, an effect known as "Lost in the Middle" (LITM). We find that Gemini 2.5 Flash answers needle-in-a-haystack questions with high accuracy regardless of where the target document is placed, including when the surrounding context is nearly at the input context limit. Our results suggest that the "Lost in the Middle" effect is not present for simple factoid Q&A in Gemini 2.5 Flash, indicating substantial improvements in long-context retrieval.
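For readers who want to run a needle-in-a-haystack check like the one described above, the sketch below shows one common way to set it up: pad a context with filler text, insert a known fact at varying relative depths, and check whether the model's answer contains that fact. The paper does not publish its harness, so the filler text, depth levels, substring-based scoring, and the `ask_model` stub are all illustrative assumptions; `ask_model` would need to be wired to whatever LLM API you actually use (for example, the Gemini API for Gemini 2.5 Flash).

```python
# Minimal needle-in-a-haystack sketch (illustrative; not the paper's harness).
# `ask_model` is a hypothetical stand-in for a call to an LLM API.

from typing import Callable

NEEDLE = "The secret launch code for Project Bluebird is 7421."
QUESTION = "What is the secret launch code for Project Bluebird?"
EXPECTED = "7421"
FILLER_SENTENCE = "The afternoon passed quietly while nothing of importance happened. "


def build_haystack(total_chars: int, needle_depth: float) -> str:
    """Build a filler context of roughly total_chars and insert the needle at a relative depth in [0, 1]."""
    repeats = total_chars // len(FILLER_SENTENCE) + 1
    filler = (FILLER_SENTENCE * repeats)[:total_chars]
    insert_at = int(len(filler) * needle_depth)
    return filler[:insert_at] + " " + NEEDLE + " " + filler[insert_at:]


def run_probe(ask_model: Callable[[str], str],
              total_chars: int,
              depths: list[float]) -> dict[float, bool]:
    """Ask the question with the needle at each depth; record whether the answer contains the expected fact."""
    results: dict[float, bool] = {}
    for depth in depths:
        context = build_haystack(total_chars, depth)
        prompt = f"{context}\n\nQuestion: {QUESTION}\nAnswer concisely."
        results[depth] = EXPECTED in ask_model(prompt)
    return results


if __name__ == "__main__":
    # Stub model for a dry run; replace with a real API call to evaluate an actual LLM.
    def echo_model(prompt: str) -> str:
        return "The code is 7421."  # placeholder response

    print(run_probe(echo_model, total_chars=20_000, depths=[0.0, 0.25, 0.5, 0.75, 1.0]))
```

Substring matching is a simplification; a production harness would typically normalize answers or use an LLM judge, and would sweep context lengths up toward the model's input limit as well as needle depths.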
Similar Papers
- What Works for 'Lost-in-the-Middle' in LLMs? A Study on GM-Extract and Mitigations (Computation and Language): Helps computers remember long stories better.
- Context Length Alone Hurts LLM Performance Despite Perfect Retrieval (Computation and Language): Makes computers understand long stories better.
- Positional Biases Shift as Inputs Approach Context Window Limits (Computation and Language): Makes computers remember information better, even when it's long.