Adversarial Suffix Filtering: a Defense Pipeline for LLMs
By: David Khachaturov, Robert Mullins
Potential Business Impact:
Stops AI chatbots from being tricked by malicious prompts.
Large Language Models (LLMs) are increasingly embedded in autonomous systems and public-facing environments, yet they remain susceptible to jailbreak vulnerabilities that may undermine their security and trustworthiness. Adversarial suffixes are considered the current state-of-the-art jailbreak, consistently outperforming simpler methods and frequently succeeding even in black-box settings. Existing defenses either rely on access to models' internal architecture, limiting diverse deployment, dramatically increase memory and computation footprints, or can be bypassed with simple prompt engineering. We introduce $\textbf{Adversarial Suffix Filtering}$ (ASF), a novel, lightweight, model-agnostic defensive pipeline designed to protect LLMs against adversarial suffix attacks. ASF functions as an input preprocessor and sanitizer that detects and filters adversarially crafted suffixes in prompts, effectively neutralizing malicious injections. We demonstrate that ASF provides comprehensive defense across both black-box and white-box attack settings, reducing the attack efficacy of state-of-the-art adversarial suffix generation methods to below 4%, while only minimally affecting the target model's capabilities in non-adversarial scenarios.
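To make the "input preprocessor and sanitizer" idea concrete, the sketch below shows one plausible way such a filter could sit in front of a target model. It is not the authors' ASF implementation: the perplexity-based detector, the 20-token trailing window, the threshold value, and the `target_llm` / `sanitize` names are all illustrative assumptions, chosen only to show where a suffix filter fits in the pipeline.

```python
# A minimal, illustrative sketch of a suffix-filtering input preprocessor.
# NOTE: this is not the authors' ASF implementation. The perplexity-based
# detector, the trailing-window size, and the threshold are all assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")      # small LM used only for scoring
detector = AutoModelForCausalLM.from_pretrained("gpt2")
detector.eval()

def tail_perplexity(prompt: str, window: int) -> float:
    """Perplexity of the last `window` tokens, conditioned on the rest of the prompt."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    labels = ids.clone()
    labels[:, :-window] = -100            # score only the trailing window
    with torch.no_grad():
        loss = detector(ids, labels=labels).loss
    return torch.exp(loss).item()

def sanitize(prompt: str, window: int = 20, threshold: float = 1000.0) -> str:
    """Strip the trailing window if it looks like a high-perplexity adversarial suffix."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids[0]
    window = min(window, max(1, ids.shape[0] // 2))
    if tail_perplexity(prompt, window) > threshold:
        return tokenizer.decode(ids[:-window], skip_special_tokens=True)
    return prompt

# Usage: feed the sanitized prompt, not the raw one, to the target model.
# clean_prompt = sanitize(user_prompt)
# response = target_llm.generate(clean_prompt)   # `target_llm` is hypothetical
```

Because the filter only touches the prompt text, it stays model-agnostic in the sense described in the abstract: the target model needs no retraining or internal access, and benign prompts pass through unchanged unless their tail trips the detector.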
Similar Papers
Super Suffixes: Bypassing Text Generation Alignment and Guard Models Simultaneously
Cryptography and Security
Breaks through AI safety guards, then fixes them.
Helping Large Language Models Protect Themselves: An Enhanced Filtering and Summarization System
Computation and Language
Protects AI from bad instructions without retraining.
Universal Adversarial Suffixes Using Calibrated Gumbel-Softmax Relaxation
Computation and Language
Makes AI models easily fooled by bad words.