Self-Destructive Language Model
By: Yuhui Wang, Rongyi Zhu, Ting Wang
Potential Business Impact:
Makes AI models stop working if someone tries to retrain them to do harm.
Harmful fine-tuning attacks pose a major threat to the security of large language models (LLMs), allowing adversaries to compromise safety guardrails with minimal harmful data. While existing defenses attempt to reinforce LLM alignment, they fail to address models' inherent "trainability" on harmful data, leaving them vulnerable to stronger attacks with increased learning rates or larger harmful datasets. To overcome this critical limitation, we introduce SEAM, a novel alignment-enhancing defense that transforms LLMs into self-destructive models with intrinsic resilience to misalignment attempts. Specifically, these models retain their capabilities for legitimate tasks while exhibiting substantial performance degradation when fine-tuned on harmful data. The protection is achieved through a novel loss function that couples the optimization trajectories of benign and harmful data, enhanced with adversarial gradient ascent to amplify the self-destructive effect. To enable practical training, we develop an efficient Hessian-free gradient estimate with theoretical error bounds. Extensive evaluation across LLMs and datasets demonstrates that SEAM creates a no-win situation for adversaries: the self-destructive models achieve state-of-the-art robustness against low-intensity attacks and undergo catastrophic performance collapse under high-intensity attacks, rendering them effectively unusable. (warning: this paper contains potentially harmful content generated by LLMs.)
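To make the idea concrete, below is a minimal, hypothetical sketch of what a trajectory-coupling update could look like in PyTorch. It is not the paper's actual SEAM loss: the function name seam_style_step, the inner step size inner_lr, and the weights beta and lam are illustrative assumptions. The sketch (a) minimizes the benign-task loss, (b) performs gradient ascent on harmful data, and (c) couples the two trajectories by pushing the benign loss up at the point an attacker would reach after one fine-tuning step on harmful data, dropping the second-order (Hessian) term in the spirit of the Hessian-free approximation the abstract mentions.

```python
# Hypothetical sketch of a SEAM-style training step. All hyperparameters and
# the exact form of the coupling term are illustrative assumptions, not the
# paper's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F


def seam_style_step(model, opt, benign, harmful,
                    inner_lr=1e-2, beta=0.1, lam=1.0):
    """One update that (a) fits benign data, (b) ascends on harmful data, and
    (c) couples the two trajectories so that a simulated fine-tuning step on
    harmful data raises the benign loss. The second-order term of (c) is
    dropped (Hessian-free, first-order approximation)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # (a) benign-task gradient (to be minimized)
    benign_loss = F.cross_entropy(model(benign["x"]), benign["y"])
    g_benign = torch.autograd.grad(benign_loss, params)

    # (b) harmful-data gradient (to be maximized -> adversarial gradient ascent)
    harmful_loss = F.cross_entropy(model(harmful["x"]), harmful["y"])
    g_harmful = torch.autograd.grad(harmful_loss, params)

    # (c) coupling: take one virtual SGD step toward fitting the harmful data,
    # then evaluate the benign-loss gradient at that point (Hessian term dropped).
    with torch.no_grad():
        for p, g in zip(params, g_harmful):
            p.add_(g, alpha=-inner_lr)          # virtual attacker step
    coupled_loss = F.cross_entropy(model(benign["x"]), benign["y"])
    g_coupled = torch.autograd.grad(coupled_loss, params)
    with torch.no_grad():
        for p, g in zip(params, g_harmful):
            p.add_(g, alpha=inner_lr)           # undo the virtual step

    # Combine: descend on benign loss, ascend on harmful loss and on the
    # post-attack benign loss (the self-destructive coupling term).
    opt.zero_grad()
    for p, gb, gh, gc in zip(params, g_benign, g_harmful, g_coupled):
        p.grad = gb - beta * gh - lam * gc
    opt.step()
    return benign_loss.item(), harmful_loss.item(), coupled_loss.item()


if __name__ == "__main__":
    # Toy stand-in for an LLM: a small classifier on random data.
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    benign = {"x": torch.randn(8, 16), "y": torch.randint(0, 2, (8,))}
    harmful = {"x": torch.randn(8, 16), "y": torch.randint(0, 2, (8,))}
    print(seam_style_step(model, opt, benign, harmful))
```

Under these assumptions, the coupling term is what creates the "no-win" behavior described above: if an adversary later fine-tunes on harmful data, the model has already been shaped so that moving in that direction degrades its benign performance.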
Similar Papers
Self-Adapting Language Models
Machine Learning (CS)
Lets computers learn new things by themselves.
Revealing the Intrinsic Ethical Vulnerability of Aligned Large Language Models
Computation and Language
AI can still be tricked into saying bad things.
Self-Refining Language Model Anonymizers via Adversarial Distillation
Computation and Language
Makes computers hide private info in text.