A Survey of Attacks on Large Language Models
By: Wenrui Xu, Keshab K. Parhi
Potential Business Impact:
Helps protect widely deployed LLM-based applications from adversarial attacks such as malicious misuse, privacy leakage, and service disruption.
Large language models (LLMs) and LLM-based agents have been widely deployed in real-world applications, including healthcare diagnostics, financial analysis, customer support, robotics, and autonomous driving, owing to their powerful capabilities in understanding, reasoning about, and generating natural language. However, this wide deployment exposes critical security and reliability risks, such as malicious misuse, privacy leakage, and service disruption, that weaken user trust and undermine societal safety. This paper provides a systematic overview of adversarial attacks targeting both LLMs and LLM-based agents. The attacks are organized into three phases: Training-Phase Attacks, Inference-Phase Attacks, and Availability & Integrity Attacks. For each phase, we analyze representative and recently introduced attack methods along with their corresponding defenses. We hope this survey serves as a tutorial and provides a comprehensive understanding of LLM security, with a particular focus on attacks against LLMs. We aim to raise awareness of the risks inherent in widely deployed LLM-based applications and to highlight the urgent need for robust mitigation strategies against evolving threats.
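As a rough illustration of this three-phase organization, the sketch below (a Python example assumed here for exposition, not taken from the paper) encodes the taxonomy as a small data structure. The specific attack and defense names are common examples from the broader LLM-security literature, not the survey's own catalog.

# Minimal sketch of the survey's three-phase taxonomy of attacks on LLMs and
# LLM-based agents. The attack/defense entries are illustrative placeholders.
from dataclasses import dataclass, field
from enum import Enum


class Phase(Enum):
    TRAINING = "Training-Phase Attacks"
    INFERENCE = "Inference-Phase Attacks"
    AVAILABILITY_INTEGRITY = "Availability & Integrity Attacks"


@dataclass
class AttackEntry:
    name: str
    phase: Phase
    target: str                          # "LLM" or "LLM-based agent"
    defenses: list[str] = field(default_factory=list)


# Illustrative entries (hypothetical examples, not the paper's full catalog).
taxonomy = [
    AttackEntry("data poisoning / backdoor insertion", Phase.TRAINING, "LLM",
                defenses=["training-data filtering", "backdoor scanning"]),
    AttackEntry("jailbreak / prompt injection", Phase.INFERENCE, "LLM-based agent",
                defenses=["input sanitization", "safety alignment"]),
    AttackEntry("resource-exhaustion denial of service", Phase.AVAILABILITY_INTEGRITY,
                "LLM", defenses=["rate limiting", "query cost budgeting"]),
]

# Group entries by phase, mirroring the survey's organization.
by_phase = {p: [a.name for a in taxonomy if a.phase is p] for p in Phase}
for phase, attacks in by_phase.items():
    print(f"{phase.value}: {', '.join(attacks)}")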
Similar Papers
Attack and defense techniques in large language models: A survey and new perspectives
Cryptography and Security
Security Concerns for Large Language Models: A Survey
Cryptography and Security
LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures
Cryptography and Security