Private-RAG: Answering Multiple Queries with LLMs while Keeping Your Data Private

Published: November 10, 2025 | arXiv ID: 2511.07637v1

By: Ruihan Wu, Erchi Wang, Zhiyuan Zhang, and more

Potential Business Impact:

Keeps sensitive documents private when LLM-based systems answer many user questions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Retrieval-augmented generation (RAG) enhances large language models (LLMs) by retrieving documents from an external corpus at inference time. When this corpus contains sensitive information, however, unprotected RAG systems are at risk of leaking private information. Prior work has introduced differential privacy (DP) guarantees for RAG, but only in single-query settings, which fall short of realistic usage. In this paper, we study the more practical multi-query setting and propose two DP-RAG algorithms. The first, MURAG, leverages an individual privacy filter so that the accumulated privacy loss only depends on how frequently each document is retrieved rather than the total number of queries. The second, MURAG-ADA, further improves utility by privately releasing query-specific thresholds, enabling more precise selection of relevant documents. Our experiments across multiple LLMs and datasets demonstrate that the proposed methods scale to hundreds of queries within a practical DP budget ($\varepsilon\approx10$), while preserving meaningful utility.
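To make the individual privacy filter idea concrete, here is a minimal Python sketch: each document carries its own privacy budget, is charged only when it is actually retrieved and used, and is filtered out once its budget is exhausted. This is an illustrative sketch under assumptions, not the authors' MURAG implementation; the names (`IndividualPrivacyFilter`, `admit`, `answer_query`), the fixed per-use cost `epsilon_step`, and the `retriever`/`llm` interfaces are all hypothetical.

```python
# Illustrative sketch of an individual privacy filter for multi-query RAG.
# NOT the paper's algorithm: per-use cost model and interfaces are assumed.

class IndividualPrivacyFilter:
    """Tracks accumulated privacy loss per document.

    A document may contribute to a query only while its accumulated loss
    stays within epsilon_total. Its total loss therefore depends on how
    often it is retrieved, not on the total number of queries answered.
    """

    def __init__(self, epsilon_total: float):
        self.epsilon_total = epsilon_total
        self.spent: dict[str, float] = {}  # doc_id -> accumulated loss

    def admit(self, doc_id: str, epsilon_step: float) -> bool:
        # Charge the document only if this step keeps it within budget.
        used = self.spent.get(doc_id, 0.0)
        if used + epsilon_step > self.epsilon_total:
            return False  # budget exhausted: exclude this document
        self.spent[doc_id] = used + epsilon_step
        return True


def answer_query(query, retriever, llm, filt, epsilon_step=0.5):
    """Answer one query using only documents with remaining budget."""
    candidates = retriever(query)  # assumed: list of (doc_id, text) pairs
    context = [text for doc_id, text in candidates
               if filt.admit(doc_id, epsilon_step)]
    # Assumed: the generation step applies its own DP mechanism so that
    # each admitted document's contribution costs at most epsilon_step.
    return llm(query, context)
```

Under this framing, the second algorithm in the abstract, MURAG-ADA, would additionally spend a small part of the budget to release a query-specific relevance threshold privately, so that only documents above that threshold are admitted; that thresholding step is not shown in the sketch.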

Country of Origin
🇺🇸 United States

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)