LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures
By: Francisco Aguilera-Martínez, Fernando Berzal
Potential Business Impact:
Protects smart computer programs from being tricked.
As large language models (LLMs) continue to evolve, it is critical to assess the security threats and vulnerabilities that may arise both during their training phase and after models have been deployed. This survey seeks to define and categorize the various attacks targeting LLMs, distinguishing between those that occur during the training phase and those that affect already trained models. A thorough analysis of these attacks is presented, alongside an exploration of defense mechanisms designed to mitigate such threats. Defenses are classified into two primary categories: prevention-based and detection-based defenses. Furthermore, our survey summarizes possible attacks and their corresponding defense strategies. It also provides an evaluation of the effectiveness of the known defense mechanisms for the different security threats. Our survey aims to offer a structured framework for securing LLMs, while also identifying areas that require further research to improve and strengthen defenses against emerging security challenges.
Similar Papers
Attack and defense techniques in large language models: A survey and new perspectives
Cryptography and Security
Protects smart computer programs from being tricked.
A Survey of Attacks on Large Language Models
Cryptography and Security
Protects smart computer programs from being tricked.
LLM in the Middle: A Systematic Review of Threats and Mitigations to Real-World LLM-based Systems
Cryptography and Security
Protects smart computer programs from hackers.