Revealing the Intrinsic Ethical Vulnerability of Aligned Large Language Models
By: Jiawei Lian, Jianhong Pan, Lefan Wang, and more
Potential Business Impact:
AI can still be tricked into saying bad things.
Large language models (LLMs) are foundational explorations toward artificial general intelligence, yet their alignment with human values via instruction tuning and preference learning achieves only superficial compliance. Here, we demonstrate that harmful knowledge embedded during pretraining persists as indelible "dark patterns" in LLMs' parametric memory, evading alignment safeguards and resurfacing under adversarial inducement at distributional shifts. In this study, we first theoretically analyze the intrinsic ethical vulnerability of aligned LLMs, proving that current alignment methods yield only local "safety regions" in the knowledge manifold, while pretrained knowledge remains globally connected to harmful concepts via high-likelihood adversarial trajectories. Building on this theoretical insight, we empirically validate our findings through semantic coherence inducement under distributional shifts, a method that systematically bypasses alignment constraints via optimized adversarial prompts. This combined theoretical and empirical approach achieves a 100% attack success rate on 19 of 23 state-of-the-art aligned LLMs, including DeepSeek-R1 and LLaMA-3, revealing their universal vulnerability.
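To make the notion of "high-likelihood trajectories" concrete, the sketch below shows one generic way to score how likely a candidate continuation is under a pretrained causal language model. This is an illustrative assumption, not the authors' procedure: the model choice (gpt2 as a small stand-in), the Hugging Face transformers API usage, and the averaged per-token log-likelihood score are all choices made here for demonstration.

```python
# Minimal sketch (not the paper's method): score the average per-token
# log-likelihood of a continuation given a prompt under a pretrained causal LM,
# as a rough proxy for whether a token "trajectory" is high-likelihood.
# Assumes Hugging Face transformers and PyTorch are installed; "gpt2" is a
# small stand-in for a much larger aligned model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def continuation_log_likelihood(prompt: str, continuation: str) -> float:
    """Average log-probability of `continuation` tokens given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits          # (1, seq_len, vocab)
    # Positions 0..N-2 predict tokens 1..N-1.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_lls = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the tokens that belong to the continuation (shift by one
    # because targets are offset relative to the input positions).
    cont_start = prompt_ids.shape[1] - 1
    return token_lls[:, cont_start:].mean().item()


# Usage: compare the same continuation under two prompts; a higher score
# indicates the continuation lies on a higher-likelihood trajectory.
print(continuation_log_likelihood("The capital of France is", " Paris."))
print(continuation_log_likelihood("My favorite color is", " Paris."))
```

In the paper's framing, an adversary searches for prompts that steer generation onto such high-likelihood paths toward harmful content; the scorer above only illustrates how likelihood along a path could be measured, under the stated assumptions.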
Similar Papers
Unintended Harms of Value-Aligned LLMs: Psychological and Empirical Insights
Computation and Language
Makes AI that learns your values safer.
Thinking Hard, Going Misaligned: Emergent Misalignment in LLMs
Computation and Language
Smart computers can become more dangerous when they think harder.
Misaligned from Within: Large Language Models Reproduce Our Double-Loop Learning Blindness
Human-Computer Interaction
AI learns our bad habits, hindering progress.