Investigating the Robustness of Retrieval-Augmented Generation at the Query Level
By: Sezen Perçin, Xin Su, Qutub Sha Syed, and more
Potential Business Impact:
Makes AI smarter by improving how it finds answers.
Large language models (LLMs) are costly and inefficient to update with new information. To address this limitation, retrieval-augmented generation (RAG) has been proposed as a solution that dynamically incorporates external knowledge during inference, improving factual consistency and reducing hallucinations. Despite its promise, RAG systems face practical challenges, most notably a strong dependence on the quality of the input query for accurate retrieval. In this paper, we investigate the sensitivity of different components in the RAG pipeline to various types of query perturbations. Our analysis reveals that the performance of commonly used retrievers can degrade significantly even under minor query variations. We study each module in isolation as well as their combined effect in an end-to-end question answering setting, using both general-domain and domain-specific datasets. Additionally, we propose an evaluation framework to systematically assess the query-level robustness of RAG pipelines and offer actionable recommendations for practitioners, based on the results of more than 1092 experiments we performed.
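To make the perturb-and-measure idea concrete, here is a minimal Python sketch of what a query-level robustness check could look like. This is not the authors' framework: the retrieve callback, the inject_typo perturbation, and the robustness_drop metric are hypothetical names standing in for whatever retriever, perturbation types, and metrics a practitioner actually plugs in.

```python
import random
from typing import Callable, List, Set

def inject_typo(query: str, rng: random.Random) -> str:
    """Perturb a query by swapping two adjacent characters (a simple noise model;
    real studies would also use paraphrases, entity swaps, etc.)."""
    if len(query) < 2:
        return query
    i = rng.randrange(len(query) - 1)
    chars = list(query)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def recall_at_k(retrieved: List[str], relevant: Set[str], k: int) -> float:
    """Fraction of relevant document IDs found in the top-k retrieved list."""
    return len(set(retrieved[:k]) & relevant) / max(len(relevant), 1)

def robustness_drop(
    retrieve: Callable[[str], List[str]],  # hypothetical retriever: query -> ranked doc IDs
    query: str,
    relevant: Set[str],
    k: int = 10,
    n_perturbations: int = 20,
    seed: int = 0,
) -> float:
    """Average drop in recall@k between the clean query and its perturbed variants."""
    rng = random.Random(seed)
    clean = recall_at_k(retrieve(query), relevant, k)
    perturbed = [
        recall_at_k(retrieve(inject_typo(query, rng)), relevant, k)
        for _ in range(n_perturbations)
    ]
    return clean - sum(perturbed) / n_perturbations
```

With a real sparse or dense retriever supplied as retrieve, a large positive robustness_drop on a benchmark's queries would signal the kind of query-level fragility the paper reports for commonly used retrievers.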
Similar Papers
Evaluating the Retrieval Robustness of Large Language Models
Computation and Language
Makes AI smarter by checking its facts.
Retrieval-Augmented Generation: A Comprehensive Survey of Architectures, Enhancements, and Robustness Frontiers
Information Retrieval
Helps computers answer questions with real-world facts.
When Retrieval Succeeds and Fails: Rethinking Retrieval-Augmented Generation for LLMs
Computation and Language
Helps smart computers learn new things faster.