Predict the Retrieval! Test time adaptation for Retrieval Augmented Generation
By: Xin Sun, Zhongqi Chen, Qiang Liu, and more
Potential Business Impact:
Helps AI answer questions better in new subjects.
Retrieval-Augmented Generation (RAG) has emerged as a powerful approach for enhancing large language models' question-answering capabilities through the integration of external knowledge. However, when adapting RAG systems to specialized domains, challenges arise from distribution shifts, resulting in suboptimal generalization performance. In this work, we propose TTARAG, a test-time adaptation method that dynamically updates the language model's parameters during inference to improve RAG system performance in specialized domains. Our method introduces a simple yet effective approach where the model learns to predict retrieved content, enabling automatic parameter adjustment to the target domain. Through extensive experiments across six specialized domains, we demonstrate that TTARAG achieves substantial performance improvements over baseline RAG systems. Code available at https://github.com/sunxin000/TTARAG.
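The core idea of the abstract — taking a few gradient steps at inference time so the model gets better at predicting the retrieved passages, thereby adapting to the target domain — can be illustrated with a deliberately tiny sketch. This is not the paper's implementation: it uses a toy unigram language model over logits instead of an LLM, and the names (`tta_step`, `nll`) are invented for illustration. It only shows the shape of the loop: measure the model's loss on retrieved text, update parameters to reduce it, then proceed to answer.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def nll(theta, token_ids):
    # Mean negative log-likelihood of the retrieved tokens
    # under the toy unigram model parameterized by logits theta.
    p = softmax(theta)
    return -np.mean(np.log(p[token_ids]))

def tta_step(theta, token_ids, lr=0.5):
    # One test-time adaptation step: gradient descent on the
    # NLL of the retrieved content. For a softmax unigram model,
    # the gradient of the mean NLL w.r.t. the logits is
    # (model probabilities - empirical token frequencies).
    p = softmax(theta)
    counts = np.bincount(token_ids, minlength=len(theta)) / len(token_ids)
    grad = p - counts
    return theta - lr * grad

# Toy "retrieved document" from a 5-word vocabulary.
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3, "dog": 4}
retrieved = [vocab[w] for w in "the cat sat the cat".split()]

theta = np.zeros(len(vocab))          # start from a uniform model
loss_before = nll(theta, retrieved)
for _ in range(20):                   # adapt at "test time"
    theta = tta_step(theta, retrieved)
loss_after = nll(theta, retrieved)

print(loss_after < loss_before)       # adaptation reduces loss on retrieved text
```

In the actual method, the analogue of `theta` would be the language model's (possibly partial) parameters, and `nll` would be the LM loss on the retrieved passages; the sketch only conveys that the adaptation signal comes from predicting the retrieval, with no labels required.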
Similar Papers
TAdaRAG: Task Adaptive Retrieval-Augmented Generation via On-the-Fly Knowledge Graph Construction
Computation and Language
Helps AI answer questions better by finding smarter facts.
Domain-Specific Data Generation Framework for RAG Adaptation
Computation and Language
Helps AI learn from specific books and documents.
TeleRAG: Efficient Retrieval-Augmented Generation Inference with Lookahead Retrieval
Distributed, Parallel, and Cluster Computing
Makes AI answer questions faster using less computer memory.