Toward Understanding the Transferability of Adversarial Suffixes in Large Language Models
By: Sarah Ball, Niki Hasrati, Alexander Robey, and more
Potential Business Impact:
Makes AI say bad things even when it shouldn't.
Discrete optimization-based jailbreaking attacks on large language models aim to generate short, nonsensical suffixes that, when appended to input prompts, elicit disallowed content. Notably, these suffixes are often transferable -- succeeding on prompts and models for which they were never optimized. Yet, although transferability is surprising and empirically well-established, the field lacks a rigorous analysis of when and why transfer occurs. To fill this gap, we identify three statistical properties that strongly correlate with transfer success across numerous experimental settings: (1) how much a prompt without a suffix activates a model's internal refusal direction, (2) how strongly a suffix pushes the representation away from this direction, and (3) how large these shifts are in directions orthogonal to refusal. In contrast, we find that prompt semantic similarity only weakly correlates with transfer success. These findings yield a more fine-grained understanding of transferability, which we use in interventional experiments to show how our statistical analysis can translate into practical improvements in attack success.
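The three correlates are all defined in terms of projections onto a model's internal refusal direction. The sketch below is an illustrative reconstruction (not the authors' code) of how such statistics could be computed, assuming one already has hidden-state vectors for a prompt with and without the suffix (e.g., a residual-stream activation at a chosen layer and token) and a previously extracted refusal direction; the function and variable names are hypothetical.

```python
# Illustrative sketch of the three transfer correlates described in the abstract.
# Assumptions (not from the paper's code): h_prompt and h_prompt_suffix are
# hidden-state vectors for the prompt without/with the adversarial suffix, and
# r is a refusal direction already extracted from the model's activations.
import numpy as np

def refusal_statistics(h_prompt: np.ndarray,
                       h_prompt_suffix: np.ndarray,
                       r: np.ndarray) -> dict:
    r = r / np.linalg.norm(r)            # work with a unit-norm refusal direction
    shift = h_prompt_suffix - h_prompt   # representation change caused by the suffix

    # (1) How much the suffix-free prompt activates the refusal direction.
    prompt_refusal_activation = float(h_prompt @ r)

    # (2) How strongly the suffix pushes the representation away from that
    #     direction (negative projection of the shift onto r).
    push_away_from_refusal = float(-(shift @ r))

    # (3) How large the shift is in directions orthogonal to refusal.
    orthogonal_shift = shift - (shift @ r) * r
    orthogonal_magnitude = float(np.linalg.norm(orthogonal_shift))

    return {
        "prompt_refusal_activation": prompt_refusal_activation,
        "push_away_from_refusal": push_away_from_refusal,
        "orthogonal_shift_magnitude": orthogonal_magnitude,
    }

if __name__ == "__main__":
    # Random vectors stand in for real hidden states, just to show the interface.
    rng = np.random.default_rng(0)
    d = 4096
    r = rng.normal(size=d)
    h_prompt = rng.normal(size=d)
    h_prompt_suffix = h_prompt + rng.normal(size=d)
    print(refusal_statistics(h_prompt, h_prompt_suffix, r))
```

In this framing, a transferable suffix would tend to show a large push away from refusal with a comparatively small orthogonal shift, applied to prompts that only weakly activate the refusal direction to begin with.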
Similar Papers
Guiding not Forcing: Enhancing the Transferability of Jailbreaking Attacks on LLMs via Removing Superfluous Constraints
Machine Learning (CS)
Makes AI more easily tricked into bad behavior.
Universal and Transferable Adversarial Attack on Large Language Models Using Exponentiated Gradient Descent
Machine Learning (CS)
Tricks smart computers into bad behavior.
Universal Jailbreak Suffixes Are Strong Attention Hijackers
Cryptography and Security
Makes AI safer by stopping bad instructions.