How Real is Your Jailbreak? Fine-grained Jailbreak Evaluation with Anchored Reference
By: Songyang Liu , Chaozhuo Li , Rui Pu and more
Potential Business Impact:
Helps AI understand when it's tricked.
Jailbreak attacks present a significant challenge to the safety of Large Language Models (LLMs), yet current automated evaluation methods largely rely on coarse classifications that focus mainly on harmfulness, leading to substantial overestimation of attack success. To address this problem, we propose FJAR, a fine-grained jailbreak evaluation framework with anchored references. We first categorized jailbreak responses into five fine-grained categories: Rejective, Irrelevant, Unhelpful, Incorrect, and Successful, based on the degree to which the response addresses the malicious intent of the query. This categorization serves as the basis for FJAR. Then, we introduce a novel harmless tree decomposition approach to construct high-quality anchored references by breaking down the original queries. These references guide the evaluator in determining whether the response genuinely fulfills the original query. Extensive experiments demonstrate that FJAR achieves the highest alignment with human judgment and effectively identifies the root causes of jailbreak failures, providing actionable guidance for improving attack strategies.
Similar Papers
Retrieval-Augmented Defense: Adaptive and Controllable Jailbreak Prevention for Large Language Models
Cryptography and Security
Stops AI from saying bad things, even new tricks.
JADES: A Universal Framework for Jailbreak Assessment via Decompositional Scoring
Cryptography and Security
Tests if AI can be tricked.
Latent Fusion Jailbreak: Blending Harmful and Harmless Representations to Elicit Unsafe LLM Outputs
Computation and Language
Breaks AI safety rules by mixing good and bad.