Score: 1

RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models

Published: April 25, 2025 | arXiv ID: 2504.18041v1

By: Bang An, Shiyue Zhang, Mark Dredze

Potential Business Impact:

Shows that AI systems which pull in outside documents (RAG) can be less safe than the same model used on its own.

Business Areas:
Augmented Reality Hardware, Software

Efforts to ensure the safety of large language models (LLMs) include safety fine-tuning, evaluation, and red teaming. However, despite the widespread use of the Retrieval-Augmented Generation (RAG) framework, AI safety work focuses on standard LLMs, which means we know little about how RAG use cases change a model's safety profile. We conduct a detailed comparative analysis of RAG and non-RAG frameworks with eleven LLMs. We find that RAG can make models less safe and change their safety profile. We explore the causes of this change and find that even combinations of safe models with safe documents can cause unsafe generations. In addition, we evaluate some existing red teaming methods for RAG settings and show that they are less effective than when used for non-RAG settings. Our work highlights the need for safety research and red-teaming methods specifically tailored for RAG LLMs.
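To make the comparison described in the abstract concrete, below is a minimal sketch of how one might measure unsafe-response rates for the same model with and without retrieved context. This is an illustrative assumption, not the authors' actual harness: the `generate` and `is_unsafe` stubs stand in for a real LLM client and a real safety judge (e.g., a classifier such as Llama Guard), and the prompt template and query names are hypothetical.

```python
# Sketch: compare unsafe-response rates for non-RAG vs. RAG prompting.
# All names (generate, is_unsafe, build_rag_prompt, query set) are
# illustrative placeholders, not the paper's actual evaluation code.

def generate(model: str, prompt: str) -> str:
    """Placeholder for an LLM call; swap in a real client in practice."""
    return f"[{model} response to: {prompt[:40]}...]"

def is_unsafe(response: str) -> bool:
    """Placeholder safety judge; in practice a safety classifier or LLM judge."""
    return "UNSAFE" in response

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved documents to the query, a common RAG prompt format."""
    context = "\n\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(docs))
    return f"Use the context to answer.\n\n{context}\n\nQuestion: {query}"

def unsafe_rate(model: str, queries: list[str],
                retrieved: dict[str, list[str]] | None = None) -> float:
    """Fraction of responses flagged unsafe, with or without retrieved context."""
    flagged = 0
    for q in queries:
        prompt = build_rag_prompt(q, retrieved[q]) if retrieved else q
        if is_unsafe(generate(model, prompt)):
            flagged += 1
    return flagged / len(queries)

if __name__ == "__main__":
    harmful_queries = ["example harmful query 1", "example harmful query 2"]
    benign_docs = {q: ["a benign retrieved passage"] for q in harmful_queries}
    base = unsafe_rate("some-llm", harmful_queries)
    rag = unsafe_rate("some-llm", harmful_queries, retrieved=benign_docs)
    print(f"non-RAG unsafe rate: {base:.2f} | RAG unsafe rate: {rag:.2f}")
```

The paper's key observation maps onto this setup: even when the model and the retrieved documents are individually "safe," the RAG-side unsafe rate can exceed the non-RAG rate, which is why the authors argue that safety evaluation and red teaming need to target the RAG configuration directly.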

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
31 pages

Category
Computer Science:
Computation and Language