Test-time Corpus Feedback: From Retrieval to RAG

Published: August 21, 2025 | arXiv ID: 2508.15437v2

By: Mandeep Rathee, V Venktesh, Sean MacAvaney, and more

Potential Business Impact:

Lets retrieval systems refine their queries using feedback, so they gather better evidence and produce more accurate answers.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Retrieval-Augmented Generation (RAG) has emerged as a standard framework for knowledge-intensive NLP tasks, combining large language models (LLMs) with document retrieval from external corpora. Despite its widespread use, most RAG pipelines continue to treat retrieval and reasoning as isolated components, retrieving documents once and then generating answers without further interaction. This static design often limits performance on complex tasks that require iterative evidence gathering or high-precision retrieval. Recent work in both the information retrieval (IR) and NLP communities has begun to close this gap by introducing adaptive retrieval and ranking methods that incorporate feedback. In this survey, we present a structured overview of advanced retrieval and ranking mechanisms that integrate such feedback. We categorize feedback signals based on their source and role in improving the query, retrieved context, or document pool. By consolidating these developments, we aim to bridge IR and NLP perspectives and highlight retrieval as a dynamic, learnable component of end-to-end RAG systems.
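
To make the contrast between static and feedback-driven retrieval concrete, here is a minimal sketch (not taken from the paper) of an adaptive retrieval loop. The retriever, generator, feedback signal, and query-refinement step are all hypothetical toy stand-ins; in practice these would be a dense or lexical retriever, an LLM, and a learned or pseudo-relevance feedback signal.

from typing import List

def retrieve(query: str, corpus: List[str], k: int = 3) -> List[str]:
    # Toy lexical retriever: rank documents by term overlap with the query.
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(terms & set(d.lower().split())))
    return ranked[:k]

def generate(query: str, context: List[str]) -> str:
    # Stand-in for an LLM call conditioned on the retrieved context.
    return f"Answer to '{query}' grounded in {len(context)} documents."

def needs_more_evidence(query: str, context: List[str]) -> bool:
    # Feedback signal: treat zero term overlap between the query and the
    # top-ranked document as a sign that retrieval should be retried.
    if not context:
        return True
    return not (set(query.lower().split()) & set(context[0].lower().split()))

def refine_query(query: str, context: List[str]) -> str:
    # Pseudo-relevance feedback, simplified: expand the query with a few
    # terms from the top retrieved document before the next round.
    if context:
        query = query + " " + " ".join(context[0].split()[:3])
    return query

def adaptive_rag(query: str, corpus: List[str], max_rounds: int = 3) -> str:
    # Iteratively retrieve, generate, and refine until the feedback signal
    # says the evidence is sufficient, or the round budget runs out.
    answer = ""
    for _ in range(max_rounds):
        context = retrieve(query, corpus)
        answer = generate(query, context)
        if not needs_more_evidence(query, context):
            return answer
        query = refine_query(query, context)
    return answer

corpus = [
    "retrieval augmented generation combines retrieval with language models",
    "pseudo relevance feedback expands queries with terms from top documents",
]
print(adaptive_rag("how does feedback improve retrieval", corpus))

A static RAG pipeline corresponds to running only the first iteration of this loop; the survey's categorization of feedback signals maps onto where the loop reads its signal (the query, the retrieved context, or the document pool).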

Page Count
18 pages

Category
Computer Science:
Information Retrieval