Differentially Private Synthetic Text Generation for Retrieval-Augmented Generation (RAG)
By: Junki Mori, Kazuya Kakizaki, Taiki Miyagawa, and more
Potential Business Impact:
Keeps private information safe when AI learns.
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by grounding them in external knowledge. However, its application in sensitive domains is limited by privacy risks. Existing private RAG methods typically rely on query-time differential privacy (DP), which requires repeated noise injection and leads to accumulated privacy loss. To address this issue, we propose DP-SynRAG, a framework that uses LLMs to generate differentially private synthetic RAG databases. Unlike prior methods, the synthetic text can be reused once created, thereby avoiding repeated noise injection and additional privacy costs. To preserve essential information for downstream RAG tasks, DP-SynRAG extends private prediction, which instructs LLMs to generate text that mimics subsampled database records in a DP manner. Experiments show that DP-SynRAG achieves superior performance to state-of-the-art private RAG systems while maintaining a fixed privacy budget, offering a scalable solution for privacy-preserving RAG.
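The abstract does not spell out the generation algorithm, but the core idea of private prediction can be sketched as a toy example: each subsampled private record contributes a bounded next-token distribution, the contributions are summed, Gaussian noise calibrated for DP is added, and the noisy winner is released. Everything below is an illustrative assumption, not the paper's actual method; `record_next_token_probs` is a hypothetical stand-in for a real LLM conditioned on one record, and the vocabulary is a toy one.

```python
import math
import random

# Toy vocabulary; a real system would use the LLM's tokenizer vocabulary.
VOCAB = ["the", "patient", "was", "treated", "with", "drug"]


def record_next_token_probs(record, rng):
    """Hypothetical stand-in for an LLM's next-token distribution
    conditioned on a single private record (softmax over toy logits)."""
    logits = [record.count(tok) + rng.gauss(0, 0.1) for tok in VOCAB]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def dp_next_token(records, sigma, sample_rate, seed=0):
    """Sketch of one private-prediction step: sum per-record distributions
    over a Poisson subsample (each record's contribution lies in [0, 1],
    so its sensitivity is bounded), add Gaussian noise, release the argmax."""
    rng = random.Random(seed)
    agg = [0.0] * len(VOCAB)
    for rec in records:
        if rng.random() < sample_rate:               # Poisson subsampling
            probs = record_next_token_probs(rec, rng)
            agg = [a + p for a, p in zip(agg, probs)]
    noisy = [a + rng.gauss(0, sigma) for a in agg]   # Gaussian mechanism
    return VOCAB[noisy.index(max(noisy))]


records = [
    "the patient was treated with drug",
    "patient treated with the drug",
]
print(dp_next_token(records, sigma=1.0, sample_rate=0.5))
```

Because the noise is injected once per generated token of the synthetic database, rather than once per user query, the resulting text can be reused indefinitely without further privacy cost, which is the reuse property the abstract highlights.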
Similar Papers
Private-RAG: Answering Multiple Queries with LLMs while Keeping Your Data Private
Machine Learning (CS)
Keeps private information safe when computers answer questions.
Diverse And Private Synthetic Datasets Generation for RAG evaluation: A multi-agent framework
Computation and Language
Makes AI safer by hiding private info.
RAGSynth: Synthetic Data for Robust and Faithful RAG Component Optimization
Artificial Intelligence
Makes AI smarter by teaching it better.