Dynamic Context Selection for Retrieval-Augmented Generation: Mitigating Distractors and Positional Bias
By: Malika Iratni, Mohand Boughanem, Taoufiq Dkaki
Potential Business Impact:
Finds better answers by choosing the best info.
Retrieval-Augmented Generation (RAG) enhances language model performance by incorporating external knowledge retrieved from large corpora, making it well suited to tasks such as open-domain question answering. Standard RAG systems typically rely on a fixed top-k retrieval strategy, which can either miss relevant information or introduce semantically irrelevant passages, known as distractors, that degrade output quality. The position of retrieved passages within the input context also influences the model's attention and generation outcomes: content placed in the middle tends to be overlooked, an issue known as the "lost in the middle" phenomenon. In this work, we systematically analyze the impact of distractors on generation quality and quantify their effects under varying conditions. We also investigate how the position of relevant passages within the context window affects their influence on generation. Building on these insights, we propose a context-size classifier that dynamically predicts the optimal number of documents to retrieve based on query-specific informational needs. We integrate this approach into a full RAG pipeline and demonstrate improved performance over fixed-k baselines.
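The pipeline the abstract describes can be sketched as follows. This is a minimal, illustrative stand-in, not the authors' implementation: `predict_context_size` is a toy heuristic playing the role of the learned context-size classifier, the retriever is naive term overlap rather than a dense retriever, and reordering the passages so the best one sits at the edge of the context is one simple way to address "lost in the middle".

```python
def predict_context_size(query, k_options=(1, 3, 5)):
    """Toy stand-in for the paper's context-size classifier: map the
    query's apparent complexity to a retrieval depth k. The real
    classifier is learned from query-specific informational needs."""
    n_terms = len(query.split())
    if n_terms <= 4:
        return k_options[0]   # simple factoid query: fewer passages, fewer distractors
    if n_terms <= 10:
        return k_options[1]
    return k_options[2]       # complex query: retrieve more context

def retrieve(query, corpus, k):
    """Naive lexical retriever: rank passages by term overlap with the
    query and keep the top k. A real system would use dense retrieval."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))
    return scored[:k]

def dynamic_rag_context(query, corpus):
    """Build the context for generation with a query-dependent k."""
    k = predict_context_size(query)
    passages = retrieve(query, corpus, k)
    # Mitigate "lost in the middle": reverse so the strongest passage
    # lands at the end of the context rather than being buried mid-window.
    return passages[::-1]
```

Under this sketch, a short factoid query retrieves a single passage, avoiding distractors entirely, while longer queries widen the context window.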
Similar Papers
Context-Guided Dynamic Retrieval for Improving Generation Quality in RAG Models
Computation and Language
Makes AI smarter at answering questions.
Dynamic and Parametric Retrieval-Augmented Generation
Computation and Language
Makes smart computers learn from more information.
A Survey on Knowledge-Oriented Retrieval-Augmented Generation
Computation and Language
Lets computers use outside facts to answer questions.