CORE-RAG: Lossless Compression for Retrieval-Augmented LLMs via Reinforcement Learning
By: Ziqiang Cui, Yunpeng Weng, Xing Tang, and more
Potential Business Impact:
Shrinks retrieved documents so AI answers cost less to compute and stay just as accurate.
Retrieval-Augmented Generation (RAG) has emerged as a promising approach to enhance the timeliness of knowledge and the factual accuracy of responses in Large Language Models (LLMs). However, including many retrieved documents substantially increases the input length, leading to higher computational costs. Previous studies have attempted to compress retrieved documents into shorter texts before in-context integration, but such methods often compromise end-task performance. The lack of well-defined compression targets forces many approaches to rely on fixed heuristics, which cannot guarantee that the compressed content will effectively support the end task. To address these limitations, we propose CORE, a novel method designed to achieve lossless context compression for RAG. CORE employs reinforcement learning to optimize the compression process without relying on predefined compression labels, enabling the compressor to generate summaries that maximize the accuracy of the answers generated by the LLM. Extensive experiments on four datasets demonstrate the superiority of our approach. At a high compression ratio of 3%, our method not only avoids performance degradation relative to prepending full documents across all datasets but also improves the average Exact Match (EM) score by 3.3 points. The code will be released soon.
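The training loop the abstract describes is easy to see in miniature: sample a compressed context from the compressor, let the frozen reader LLM answer from it, and use end-task accuracy as the reward, so no compression labels are needed. The sketch below illustrates that idea under stated assumptions: the sentence-level extractive policy, the `featurize` helper, the `answer_with_llm` stub, the length penalty, and plain REINFORCE are all hypothetical simplifications for illustration, not the paper's actual compressor or RL algorithm.

```python
# Minimal sketch of label-free, reward-driven context compression.
# Assumptions (not from the paper): extractive sentence selection,
# a stubbed reader LLM, exact-match reward, and vanilla REINFORCE.
import torch
import torch.nn as nn

def answer_with_llm(question: str, context: str) -> str:
    # Hypothetical stand-in for the frozen reader LLM; in practice this
    # would be a real generate() call on the downstream model.
    return "paris" if "paris" in context.lower() else "unknown"

class SentenceScorer(nn.Module):
    # Toy compressor policy: scores each sentence independently and
    # keeps it with probability sigmoid(score).
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(feats)).squeeze(-1)

def featurize(sentences, dim=16):
    # Placeholder featurizer (fixed random vectors); a real system
    # would use encoder embeddings of each sentence.
    g = torch.Generator().manual_seed(0)
    return torch.randn(len(sentences), dim, generator=g)

sentences = [
    "The Eiffel Tower is in Paris.",
    "Bananas are rich in potassium.",
    "The tower was completed in 1889.",
]
question, gold = "Where is the Eiffel Tower?", "paris"

policy = SentenceScorer()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
feats = featurize(sentences)

for step in range(200):
    probs = policy(feats)            # keep-probability per sentence
    keep = torch.bernoulli(probs)    # sample a compression mask
    context = " ".join(s for s, k in zip(sentences, keep) if k > 0)
    # End-task reward: exact match of the reader's answer (no
    # compression labels), minus a small length penalty so the
    # policy prefers short summaries.
    em = float(gold in answer_with_llm(question, context).lower())
    reward = em - 0.05 * keep.sum().item() / len(sentences)
    # REINFORCE: log-probability of the sampled mask, scaled by reward.
    log_prob = (keep * torch.log(probs + 1e-8)
                + (1 - keep) * torch.log(1 - probs + 1e-8)).sum()
    loss = -reward * log_prob
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the reward is computed from the reader's answer, this toy policy learns to keep the one sentence that supports the question and drop the rest, which is the same incentive, at far larger scale, that lets a label-free RL objective target lossless compression.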
Similar Papers
Enhancing RAG Efficiency with Adaptive Context Compression
Computation and Language
Makes AI answer questions faster and smarter.
Optimizing Retrieval for RAG via Reinforced Contrastive Learning
Computation and Language
AI learns to find better information for itself.
ECoRAG: Evidentiality-guided Compression for Long Context RAG
Computation and Language
Helps computers answer questions better and faster.