A Survey on Privacy Risks and Protection in Large Language Models
By: Kang Chen, Xiuze Zhou, Yuanguo Lin, and more
Potential Business Impact:
Helps organizations understand and mitigate the ways LLMs can leak sensitive data, reducing privacy and compliance risk when deploying these models.
Although Large Language Models (LLMs) have become integral to a wide range of applications, their capabilities raise significant privacy concerns. This survey offers a comprehensive overview of the privacy risks associated with LLMs and examines current solutions for mitigating them. First, we analyze privacy leakage and attacks on LLMs, focusing on how adversaries can extract sensitive information from these models through techniques such as model inversion, training data extraction, and membership inference. We investigate the mechanisms of privacy leakage, including the unauthorized extraction of training data and the exploitation of these vulnerabilities by malicious actors. Next, we review existing privacy-protection techniques against such risks, including inference detection, federated learning, backdoor mitigation, and confidential computing, and assess their effectiveness in preventing privacy leakage. Furthermore, we highlight key practical challenges and propose future research directions for developing secure and privacy-preserving LLMs, emphasizing privacy risk assessment, secure knowledge transfer between models, and interdisciplinary frameworks for privacy governance. Ultimately, this survey aims to establish a roadmap for addressing the escalating privacy challenges in the LLM domain.
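To make one of the surveyed attack classes concrete, below is a minimal sketch of a loss-threshold membership-inference attack: the adversary guesses that examples to which the model assigns unusually low loss were part of its training set. Everything here (the function name, the calibration heuristic, and the synthetic loss values) is an illustrative assumption for exposition, not code or data from the survey itself.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, candidate_losses, quantile=0.5):
    """Predict training-set membership: a candidate is flagged as a
    'member' if its loss falls below a threshold calibrated on known
    member and non-member losses."""
    # Calibrate the threshold between typical member losses (low,
    # because the model has memorized them) and non-member losses
    # (higher, because the examples are unseen).
    threshold = np.quantile(
        np.concatenate([member_losses, nonmember_losses]), quantile
    )
    return candidate_losses < threshold  # True => predicted "member"

# Toy usage with synthetic per-example losses (illustration only).
rng = np.random.default_rng(0)
members = rng.normal(1.0, 0.3, 100)      # low loss: examples the model trained on
nonmembers = rng.normal(2.5, 0.5, 100)   # higher loss: unseen examples
candidates = np.array([0.9, 2.8, 1.2])
print(loss_threshold_mia(members, nonmembers, candidates))  # e.g. [ True False  True]
```

Real attacks in the literature refine this idea (e.g., calibrating per-example with shadow models rather than a single global quantile), but the core signal, lower loss on memorized training data, is the same one the defenses reviewed in this survey aim to suppress.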
Similar Papers
Beyond Data Privacy: New Privacy Risks for Large Language Models
Cryptography and Security
Position: Privacy Is Not Just Memorization!
Cryptography and Security
SoK: The Privacy Paradox of Large Language Models: Advancements, Privacy Risks, and Mitigation
Cryptography and Security