Optimizing Retrieval for RAG via Reinforced Contrastive Learning
By: Jiawei Zhou, Lei Chen
Potential Business Impact:
AI learns to find better information for itself.
As retrieval-augmented generation (RAG) becomes increasingly widespread, the role of information retrieval (IR) is shifting from retrieving information for human users to retrieving contextual knowledge for artificial intelligence (AI) systems, where relevance becomes difficult to define or annotate beforehand. To address this challenge, we propose R3, a Retrieval framework optimized for RAG through trial-and-feedback Reinforced contrastive learning. Unlike prior approaches that rely on annotated or synthetic data for supervised fine-tuning, R3 enables the retriever to dynamically explore and optimize relevance within the RAG environment. During training, the retrieved results interact with the environment to produce contrastive signals that automatically guide the retriever's self-improvement. Extensive experiments across diverse tasks demonstrate that R3 improves RAG performance by 5.2% over the original retriever and surpasses state-of-the-art retrievers by 4.9%, while achieving comparable results to LLM-augmented retrieval and RAG systems built on post-trained or instruction-tuned LLMs. It is both efficient and practical, requiring only 4 GPUs and completing training within a single day.
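The core idea of the abstract, that downstream RAG feedback supplies the positive/negative labels for a contrastive objective instead of human annotation, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the binary `feedback` signal, and the InfoNCE-style loss are all assumptions for clarity.

```python
import torch
import torch.nn.functional as F

def reinforced_contrastive_loss(query_emb, doc_embs, feedback, temperature=0.05):
    """Illustrative contrastive loss where positives/negatives come from
    downstream RAG feedback rather than pre-annotated relevance labels.

    query_emb: (d,) query embedding
    doc_embs:  (k, d) embeddings of the k retrieved documents
    feedback:  (k,) bool tensor -- True if the RAG output conditioned on
               that document was judged correct (the trial-and-feedback signal)
    """
    # similarity of the query to each retrieved document
    sims = F.cosine_similarity(query_emb.unsqueeze(0), doc_embs) / temperature
    log_probs = F.log_softmax(sims, dim=0)
    if feedback.any():
        # pull RAG-helpful documents toward the query; the softmax
        # implicitly pushes the unhelpful ones away
        return -log_probs[feedback].mean()
    # no document helped: skip this query (zero loss)
    return torch.zeros(())
```

In this sketch the retriever is updated only from signals the RAG environment produces at training time, which is what lets relevance be learned without annotated or synthetic supervision.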
Similar Papers
Test-time Corpus Feedback: From Retrieval to RAG
Information Retrieval
Lets computers ask better questions to find answers.
OpenRAG: Optimizing RAG End-to-End via In-Context Retrieval Learning
Computation and Language
Makes AI better at finding and using information.