Self-Routing RAG: Binding Selective Retrieval with Knowledge Verbalization
By: Di Wu, Jia-Chen Gu, Kai-Wei Chang, and more
Potential Business Impact:
Helps AI answer questions more accurately and quickly by deciding when to look things up.
Selective retrieval improves retrieval-augmented generation (RAG) by reducing distractions from low-quality retrievals and improving efficiency. However, existing approaches under-utilize the inherent knowledge of large language models (LLMs), leading to suboptimal retrieval decisions and degraded generation performance. To bridge this gap, we propose Self-Routing RAG (SR-RAG), a novel framework that binds selective retrieval with knowledge verbalization. SR-RAG enables an LLM to dynamically decide between retrieving external knowledge and verbalizing its own parametric knowledge. To this end, we design a multi-task objective that jointly optimizes an LLM on knowledge source selection, knowledge verbalization, and response generation. We further introduce dynamic knowledge source inference via nearest neighbor search to improve the accuracy of knowledge source decisions under domain shifts. Fine-tuning three LLMs with SR-RAG significantly improves their response accuracy while reducing inference latency. Compared to the strongest selective retrieval baseline, SR-RAG reduces the number of retrievals by 29% while improving performance by 5.1%.
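To make the routing idea concrete, below is a minimal sketch of the kind of nearest-neighbor knowledge source inference the abstract describes: embed an incoming query, compare it against a pool of queries whose better knowledge source is already labeled, and let a majority vote decide whether to retrieve or to verbalize parametric knowledge. All names here (embed_query, the labeled pool, k) are illustrative assumptions, not the authors' released implementation, which operates on the LLM's own representations and trained source labels.

```python
# Hypothetical sketch of SR-RAG-style routing: per query, decide between
# external retrieval and verbalizing the LLM's parametric knowledge by
# k-nearest-neighbor voting over query embeddings. Placeholder embeddings
# stand in for the LLM hidden states used in the paper.
import numpy as np

def embed_query(text: str) -> np.ndarray:
    """Placeholder: in practice, return the LLM's hidden-state embedding."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(768)

# Labeled pool: queries where the better knowledge source is already known.
# 1 = external retrieval helped; 0 = parametric (verbalized) knowledge sufficed.
pool = [
    ("Who wrote Hamlet?", 0),
    ("What were Acme Corp's Q3 2024 earnings?", 1),
    ("What is the capital of France?", 0),
    ("What are the latest CRISPR trial results?", 1),
]
pool_vecs = np.stack([embed_query(q) for q, _ in pool])
pool_labels = np.array([label for _, label in pool])

def should_retrieve(query: str, k: int = 3) -> bool:
    """Route by majority vote among the k nearest labeled neighbors."""
    v = embed_query(query)
    # Cosine similarity between the query and every pooled example.
    sims = pool_vecs @ v / (np.linalg.norm(pool_vecs, axis=1) * np.linalg.norm(v))
    top_k = np.argsort(-sims)[:k]
    return pool_labels[top_k].mean() >= 0.5

if should_retrieve("What does the current Wikipedia article on RAG say?"):
    print("-> call the external retriever, then generate")
else:
    print("-> verbalize parametric knowledge, then generate")
```

Because the pool can be refreshed with newly labeled queries at any time, this kind of non-parametric router can adapt to domain shifts without retraining, which is the motivation the abstract gives for nearest neighbor search over a fixed learned classifier.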
Similar Papers
SKILL-RAG: Self-Knowledge Induced Learning and Filtering for Retrieval-Augmented Generation
Computation and Language
Helps computers answer questions better by knowing what's useful.
RouteRAG: Efficient Retrieval-Augmented Generation from Text and Graph via Reinforcement Learning
Computation and Language
Lets computers learn from text and links.
Multimodal Iterative RAG for Knowledge Visual Question Answering
Computer Vision and Pattern Recognition
Helps computers answer harder questions using more information.