Security Concerns for Large Language Models: A Survey
By: Miles Q. Li, Benjamin C. M. Fung
Potential Business Impact:
Protects smart computer talk from bad guys.
Large Language Models (LLMs) such as ChatGPT and its competitors have caused a revolution in natural language processing, but their capabilities also introduce new security vulnerabilities. This survey provides a comprehensive overview of these emerging concerns, categorizing threats into several key areas: inference-time attacks via prompt manipulation; training-time attacks; misuse by malicious actors; and the inherent risks in autonomous LLM agents. Recently, a significant focus is increasingly being placed on the latter. We summarize recent academic and industrial studies from 2022 to 2025 that exemplify each threat, analyze existing defense mechanisms and their limitations, and identify open challenges in securing LLM-based applications. We conclude by emphasizing the importance of advancing robust, multi-layered security strategies to ensure LLMs are safe and beneficial.
Similar Papers
A Survey on Data Security in Large Language Models
Cryptography and Security
Protects smart computer programs from bad data.
A Survey of Attacks on Large Language Models
Cryptography and Security
Protects smart computer programs from being tricked.
A Survey on Privacy Risks and Protection in Large Language Models
Cryptography and Security
Keeps your secrets safe from smart computer programs.