Super Suffixes: Bypassing Text Generation Alignment and Guard Models Simultaneously
By: Andrew Adiletta, Kathryn Adiletta, Kemal Derya, and more
Potential Business Impact:
Breaks through AI safety guards, then fixes them.
The rapid deployment of Large Language Models (LLMs) has created an urgent need for enhanced security and privacy measures in Machine Learning (ML). LLMs are increasingly used to process untrusted text inputs and even to generate executable code, often while having access to sensitive system controls. To address these security concerns, several companies have introduced guard models: smaller, specialized models designed to protect text generation models from adversarial or malicious inputs. In this work, we advance the study of adversarial inputs by introducing Super Suffixes, suffixes capable of overriding multiple alignment objectives across models with different tokenization schemes. Using a joint optimization technique, we demonstrate their effectiveness by bypassing the protection mechanisms of Llama Prompt Guard 2 on five different text generation models for malicious text and code generation. To the best of our knowledge, this is the first work to show that Llama Prompt Guard 2 can be compromised through joint optimization. Additionally, by analyzing how the similarity between a model's internal state and specific concept directions evolves as a token sequence is processed, we propose an effective and lightweight method for detecting Super Suffix attacks. We show that the cosine similarity between the residual stream and certain concept directions serves as a distinctive fingerprint of model intent. Our proposed countermeasure, DeltaGuard, significantly improves the detection of malicious prompts generated through Super Suffixes, raising the non-benign classification rate to nearly 100% and making it a valuable addition to the guard model stack that strengthens robustness against adversarial prompt attacks.
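The abstract does not spell out the joint optimization procedure, but the core idea of optimizing one suffix against two objectives at once can be sketched. Everything below is an assumption for illustration, not the paper's method: the model names, the random-mutation search (a stand-in for whatever optimizer the authors actually use, e.g. a GCG-style gradient-guided search), the benign label index, and the loss weights. Because the guard and the generation model can use different tokenizers, the suffix is kept as text and re-tokenized per model.

```python
# Illustrative sketch only: jointly scoring a suffix against a guard classifier
# and a text-generation model, then improving it by random mutation.
import random
import torch
import torch.nn.functional as F
from transformers import (AutoModelForCausalLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

GEN_NAME = "meta-llama/Llama-3.1-8B-Instruct"       # assumed target model
GUARD_NAME = "meta-llama/Llama-Prompt-Guard-2-86M"  # assumed guard model

gen_tok = AutoTokenizer.from_pretrained(GEN_NAME)
gen_model = AutoModelForCausalLM.from_pretrained(GEN_NAME)
guard_tok = AutoTokenizer.from_pretrained(GUARD_NAME)
guard_model = AutoModelForSequenceClassification.from_pretrained(GUARD_NAME)

@torch.no_grad()
def joint_loss(prompt: str, suffix: str, target: str,
               alpha: float = 1.0, beta: float = 1.0) -> float:
    """Lower is better: the target continuation becomes likely under the
    generation model AND the guard scores the full prompt as benign."""
    # Generation-side loss: NLL of the attacker's target continuation.
    ctx = gen_tok(prompt + suffix, return_tensors="pt").input_ids
    tgt = gen_tok(target, add_special_tokens=False,
                  return_tensors="pt").input_ids
    full = torch.cat([ctx, tgt], dim=-1)
    logits = gen_model(full).logits[0, ctx.shape[1] - 1 : -1]
    gen_loss = F.cross_entropy(logits, tgt[0])

    # Guard-side loss: negative log-probability of the benign class
    # (class index 0 is an assumption, not Prompt Guard 2's documented layout).
    g = guard_model(**guard_tok(prompt + suffix, return_tensors="pt")).logits
    guard_loss = -F.log_softmax(g, dim=-1)[0, 0]
    return (alpha * gen_loss + beta * guard_loss).item()

def optimize_suffix(prompt: str, target: str,
                    steps: int = 200, suffix_len: int = 20) -> str:
    """Greedy random-mutation search over printable-ASCII suffixes."""
    vocab = [chr(c) for c in range(33, 127)]
    suffix = "".join(random.choices(vocab, k=suffix_len))
    best = joint_loss(prompt, suffix, target)
    for _ in range(steps):
        cand = list(suffix)
        cand[random.randrange(suffix_len)] = random.choice(vocab)
        cand = "".join(cand)
        score = joint_loss(prompt, cand, target)
        if score < best:
            suffix, best = cand, score
    return suffix
```

A gradient-guided candidate search over tokens would converge far faster than blind mutation; the point of the sketch is only that the score being minimized couples both models, so a surviving suffix simultaneously elicits the target continuation and looks benign to the guard.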
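The DeltaGuard countermeasure is described only at the level of "cosine similarity between the residual stream and concept directions as a fingerprint of intent." A minimal sketch of that signal follows, assuming a HuggingFace-style causal LM (e.g. the gen_model/gen_tok loaded above), an arbitrary mid-depth probe layer, a difference-of-means concept direction, and a simple per-token delta statistic; all of these choices are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: per-token cosine similarity between the residual
# stream and a concept direction, and a crude trajectory-based flag.
import torch
import torch.nn.functional as F

LAYER = 16  # assumed mid-depth layer to probe

@torch.no_grad()
def concept_direction(model, tokenizer, harmful: list, benign: list):
    """Hypothetical difference-of-means concept vector: mean residual-stream
    activation over harmful prompts minus the mean over benign prompts."""
    def mean_resid(prompts):
        vecs = []
        for p in prompts:
            ids = tokenizer(p, return_tensors="pt").input_ids
            hs = model(ids, output_hidden_states=True).hidden_states[LAYER]
            vecs.append(hs[0].mean(dim=0))  # average over token positions
        return torch.stack(vecs).mean(dim=0)
    d = mean_resid(harmful) - mean_resid(benign)
    return d / d.norm()

@torch.no_grad()
def similarity_trace(model, tokenizer, prompt: str,
                     concept_dir: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between each token's residual-stream state at
    LAYER and the concept direction."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model(ids, output_hidden_states=True)
    resid = out.hidden_states[LAYER][0]            # [seq_len, d_model]
    return F.cosine_similarity(resid, concept_dir.unsqueeze(0), dim=-1)

def is_suspicious(trace: torch.Tensor, jump_threshold: float = 0.15) -> bool:
    """Flag prompts whose similarity swings sharply between adjacent tokens,
    e.g. while an adversarial suffix is processed. The delta statistic and
    threshold are assumptions."""
    deltas = trace[1:] - trace[:-1]
    return bool(deltas.abs().max() > jump_threshold)
```

A real deployment would calibrate the threshold (or train a small classifier over the whole trace) on labeled data; the abstract's claim is that Super Suffixes perturb this similarity trajectory distinctively enough that such a lightweight check pushes the non-benign classification rate to nearly 100%.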
Similar Papers
Universal Adversarial Suffixes Using Calibrated Gumbel-Softmax Relaxation
Computation and Language
Makes AI models easily fooled by bad words.
Adversarial Suffix Filtering: a Defense Pipeline for LLMs
Machine Learning (CS)
Stops smart computers from being tricked.
Universal and Transferable Adversarial Attack on Large Language Models Using Exponentiated Gradient Descent
Machine Learning (CS)
Stops smart computers from being tricked.