Defending Large Language Models Against Jailbreak Exploits with Responsible AI Considerations
By: Ryan Wong , Hosea David Yu Fei Ng , Dhananjai Sharma and more
Potential Business Impact:
Stops AI from saying bad or unsafe things.
Large Language Models (LLMs) remain susceptible to jailbreak exploits that bypass safety filters and induce harmful or unethical behavior. This work presents a systematic taxonomy of existing jailbreak defenses across prompt-level, model-level, and training-time interventions, followed by three proposed defense strategies. First, a Prompt-Level Defense Framework detects and neutralizes adversarial inputs through sanitization, paraphrasing, and adaptive system guarding. Second, a Logit-Based Steering Defense reinforces refusal behavior through inference-time vector steering in safety-sensitive layers. Third, a Domain-Specific Agent Defense employs the MetaGPT framework to enforce structured, role-based collaboration and domain adherence. Experiments on benchmark datasets show substantial reductions in attack success rate, achieving full mitigation under the agent-based defense. Overall, this study highlights how jailbreaks pose a significant security threat to LLMs and identifies key intervention points for prevention, while noting that defense strategies often involve trade-offs between safety, performance, and scalability. Code is available at: https://github.com/Kuro0911/CS5446-Project
Similar Papers
Do Methods to Jailbreak and Defend LLMs Generalize Across Languages?
Computation and Language
Makes AI safer in all languages.
Evolving Security in LLMs: A Study of Jailbreak Attacks and Defenses
Cryptography and Security
Makes AI safer from bad instructions.
Evaluating Adversarial Vulnerabilities in Modern Large Language Models
Cryptography and Security
Finds ways to trick AI into saying bad things.