Toward Understanding the Transferability of Adversarial Suffixes in Large Language Models

Published: October 24, 2025 | arXiv ID: 2510.22014v1

By: Sarah Ball, Niki Hasrati, Alexander Robey, and more

Potential Business Impact:

Shows how short adversarial suffixes can trick AI chatbots into producing content they are designed to refuse, even on models the attack was never tuned for, highlighting a transferable safety risk for deployed LLM products.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Discrete optimization-based jailbreaking attacks on large language models aim to generate short, nonsensical suffixes that, when appended to input prompts, elicit disallowed content. Notably, these suffixes are often transferable, succeeding on prompts and models for which they were never optimized. Yet although transferability is surprising and empirically well-established, the field lacks a rigorous analysis of when and why transfer occurs. To fill this gap, we identify three statistical properties that strongly correlate with transfer success across numerous experimental settings: (1) how much a prompt without a suffix activates a model's internal refusal direction, (2) how strongly a suffix pushes the representation away from this direction, and (3) how large these shifts are in directions orthogonal to refusal. In contrast, we find that prompt semantic similarity only weakly correlates with transfer success. These findings yield a more fine-grained understanding of transferability, which we use in interventional experiments to show how our statistical analysis can translate into practical improvements in attack success.
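The three statistics lend themselves to a direct computation on a model's internal activations. Below is a minimal sketch, assuming residual-stream activations have already been extracted at some layer and token position of interest; the difference-of-means estimate of the refusal direction follows one common approach from the interpretability literature and is not necessarily the paper's exact procedure, and all function names here are hypothetical.

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Estimate a unit-norm refusal direction as the difference of mean
    activations between harmful and harmless prompts (a common
    difference-of-means heuristic; an assumption, not the paper's exact method).
    Both inputs have shape (n_samples, d_model)."""
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def transfer_statistics(h_prompt: np.ndarray,
                        h_prompt_suffix: np.ndarray,
                        r_hat: np.ndarray) -> tuple[float, float, float]:
    """Compute the three quantities the abstract correlates with transfer.
    h_prompt:        activation of the prompt alone, shape (d_model,)
    h_prompt_suffix: activation of prompt + suffix, shape (d_model,)
    r_hat:           unit-norm refusal direction, shape (d_model,)
    """
    # (1) How much the bare prompt activates the refusal direction.
    refusal_activation = float(h_prompt @ r_hat)

    # (2) How strongly the suffix shifts the representation along the
    #     refusal direction (a negative value is a push away from refusal).
    delta = h_prompt_suffix - h_prompt
    refusal_shift = float(delta @ r_hat)

    # (3) Magnitude of the shift in directions orthogonal to refusal.
    orthogonal_shift = float(np.linalg.norm(delta - (delta @ r_hat) * r_hat))

    return refusal_activation, refusal_shift, orthogonal_shift
```

Under this reading, a suffix that transfers well would show a large negative refusal_shift with a modest orthogonal_shift on prompts that strongly activate the refusal direction to begin with.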

Country of Origin
🇩🇪 Germany

Page Count
16 pages

Category
Computer Science:
Computation and Language