Traceback of Poisoning Attacks to Retrieval-Augmented Generation
By: Baolei Zhang, Haoran Xin, Minghong Fang, and more
Potential Business Impact:
Traces the bad info that tricks AI question-answering systems back to where it was planted.
Large language models (LLMs) integrated with retrieval-augmented generation (RAG) improve accuracy by leveraging external knowledge sources. However, recent research has revealed RAG's susceptibility to poisoning attacks, in which an attacker injects poisoned texts into the knowledge database to elicit attacker-desired responses. Existing defenses, which predominantly focus on inference-time mitigation, have proven insufficient against sophisticated attacks. In this paper, we introduce RAGForensics, the first traceback system for RAG, designed to identify the poisoned texts in the knowledge database that are responsible for an attack. RAGForensics operates iteratively: it first retrieves a subset of texts from the database and then uses a specially crafted prompt to guide an LLM in detecting potential poisoned texts. Empirical evaluations across multiple datasets demonstrate the effectiveness of RAGForensics against state-of-the-art poisoning attacks. This work pioneers the traceback of poisoned texts in RAG systems, providing a practical and promising defense mechanism to enhance their security.
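The abstract describes an iterative retrieve-then-judge loop. A minimal sketch of that idea is shown below, assuming a hypothetical retriever and LLM judge passed in as callables; it illustrates the general pattern described in the abstract, not the authors' actual RAGForensics implementation or prompt.

```python
# Illustrative sketch of an iterative poisoned-text traceback loop.
# The retriever, LLM judge, and prompt wording are assumptions for
# demonstration only, not the paper's implementation.
from typing import Callable, List

def traceback_poisoned_texts(
    query: str,
    retrieve: Callable[[str, int], List[str]],  # top-k texts from the knowledge database
    llm_judge: Callable[[str], str],            # LLM verdict for a judge prompt
    rounds: int = 3,
    top_k: int = 5,
) -> List[str]:
    """Iteratively retrieve candidate texts and ask an LLM whether each one
    appears crafted to steer the answer to the query toward an attacker's goal."""
    flagged: List[str] = []
    seen: set = set()
    for _ in range(rounds):
        for text in retrieve(query, top_k):
            if text in seen:
                continue
            seen.add(text)
            prompt = (
                "You are auditing a RAG knowledge base.\n"
                f"User query: {query}\n"
                f"Retrieved text: {text}\n"
                "Does this text appear crafted to force an incorrect or "
                "attacker-chosen answer to the query? Answer YES or NO."
            )
            if llm_judge(prompt).strip().upper().startswith("YES"):
                flagged.append(text)
    return flagged
```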
Similar Papers
Practical Poisoning Attacks against Retrieval-Augmented Generation
Cryptography and Security
Shows how attackers can slip bad info into AI knowledge sources.
Secure Retrieval-Augmented Generation against Poisoning Attacks
Cryptography and Security
Stops bad info from tricking smart computer programs.