Activation-Guided Local Editing for Jailbreaking Attacks
By: Jiecong Wang, Haoran Li, Hao Peng, and others
Potential Business Impact:
Finds AI flaws to build stronger defenses.
Jailbreaking is an essential adversarial technique for red-teaming large language models to uncover and patch security flaws. However, existing jailbreak methods face significant drawbacks. Token-level jailbreak attacks often produce incoherent or unreadable inputs and exhibit poor transferability, while prompt-level attacks lack scalability and rely heavily on manual effort and human ingenuity. We propose AGILE, a concise and effective two-stage framework that combines the advantages of both approaches. The first stage performs scenario-based context generation and rephrases the original malicious query to obscure its harmful intent. The second stage then uses information from the model's hidden states to guide fine-grained edits, steering the model's internal representation of the input from malicious toward benign. Extensive experiments demonstrate that AGILE achieves a state-of-the-art Attack Success Rate, with gains of up to 37.74% over the strongest baseline, and transfers well to black-box models. Our analysis further shows that AGILE remains largely effective against prominent defense mechanisms, highlighting the limitations of current safeguards and providing valuable insights for future defense development. Our code is available at https://github.com/yunsaijc/AGILE.
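The activation-guided idea behind the second stage can be illustrated with a minimal sketch, not the authors' implementation (the official code is at the repository linked above). It assumes a HuggingFace chat model, a hand-picked intermediate layer, and a "harmfulness direction" estimated as the mean activation difference between reference harmful and benign prompts; candidate local edits of a rephrased query are then scored by how far their last-token hidden state has drifted toward the benign side. The model name, layer index, and helper functions are illustrative assumptions.

```python
# Minimal sketch of activation-guided edit scoring (illustrative only; see
# https://github.com/yunsaijc/AGILE for the authors' implementation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-chat-hf"  # assumption: any open chat model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

LAYER = 16  # hypothetical choice of an intermediate layer


def last_token_state(prompt: str) -> torch.Tensor:
    """Hidden state of the final input token at the chosen layer."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1]  # shape: (hidden_dim,)


def harmfulness_direction(harmful: list[str], benign: list[str]) -> torch.Tensor:
    """Mean activation difference pointing from benign toward harmful prompts."""
    h = torch.stack([last_token_state(p) for p in harmful]).mean(0)
    b = torch.stack([last_token_state(p) for p in benign]).mean(0)
    d = h - b
    return d / d.norm()


def harm_score(prompt: str, direction: torch.Tensor) -> float:
    """Projection onto the harmfulness direction; lower reads as more benign."""
    return float(last_token_state(prompt) @ direction)


# Usage: among candidate local edits of a rephrased query, keep the one whose
# internal representation has moved furthest toward the benign side.
# direction = harmfulness_direction(harmful_refs, benign_refs)
# best_edit = min(candidate_edits, key=lambda p: harm_score(p, direction))
```

The design choice here, scoring edits by a projection onto a single difference-of-means direction, is only one simple way to read "steering the internal representation from malicious toward benign"; the paper's actual guidance signal may differ.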
Similar Papers
Immunity memory-based jailbreak detection: multi-agent adaptive guard for large language models
Cryptography and Security
AI learns to remember and block bad instructions.
Anyone Can Jailbreak: Prompt-Based Attacks on LLMs and T2Is
CV and Pattern Recognition
Makes AI ignore rules with tricky words.
Stand on The Shoulders of Giants: Building JailExpert from Previous Attack Experience
Cryptography and Security
Helps hackers trick smart computers into doing bad things.