Interpretable Safety Alignment via SAE-Constructed Low-Rank Subspace Adaptation
By: Dianyun Wang, Qingsen Ma, Yuhu Shang, and more
Potential Business Impact:
Makes AI safer and smarter with less training.
Parameter-efficient fine-tuning has become the dominant paradigm for adapting large language models to downstream tasks. Low-rank adaptation methods such as LoRA assume that task-relevant weight updates reside in a low-rank subspace, yet this subspace is learned implicitly from data in a black-box manner, offering neither interpretability nor direct control. We hypothesize that this opacity stems from polysemanticity: individual dimensions encode multiple entangled concepts. To address this, we leverage pre-trained Sparse Autoencoders (SAEs) to identify task-relevant features in a disentangled feature space, then construct an explicit, interpretable low-rank subspace from those features to guide adapter initialization. We provide theoretical analysis proving that, under monosemanticity assumptions, SAE-based subspace identification achieves arbitrarily small recovery error, whereas direct identification in the polysemantic space suffers an irreducible error floor. On safety alignment, our method achieves a safety rate of up to 99.6%, exceeding full fine-tuning by 7.4 percentage points and approaching RLHF-based methods, while updating only 0.19-0.24% of parameters. Crucially, the semantic grounding of SAE features makes the learned alignment subspace interpretable. Our work demonstrates that incorporating mechanistic interpretability into the fine-tuning process can improve performance and transparency simultaneously.
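The abstract gives no code, but the construction it describes (select task-relevant SAE features, span a subspace with their decoder directions, use it to initialize a low-rank adapter) is concrete enough to sketch. Below is a minimal, hypothetical PyTorch illustration, not the authors' released implementation: the names `SAESubspaceLoRA` and `select_features`, the QR orthonormalization, and the activation-difference feature scoring are all assumptions made for the example, and the paper's actual subspace construction may differ.

```python
import torch
import torch.nn as nn

class SAESubspaceLoRA(nn.Module):
    """LoRA-style adapter whose "up" projection is initialized from an
    explicit low-rank subspace spanned by selected SAE decoder directions.
    Hypothetical sketch, not the paper's released code."""

    def __init__(self, base_linear: nn.Linear, sae_decoder: torch.Tensor,
                 feature_idx: torch.Tensor, alpha: float = 1.0):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad_(False)          # freeze the pre-trained weight

        # Decoder columns of the chosen task-relevant SAE features,
        # orthonormalized (QR) into a basis for the target subspace.
        dirs = sae_decoder[:, feature_idx]   # (d_out, r)
        basis, _ = torch.linalg.qr(dirs)     # orthonormal columns, (d_out, r)
        r = basis.shape[1]
        # B starts as the SAE-derived basis, guiding adapter initialization;
        # freezing it instead would hard-constrain updates to the subspace.
        self.B = nn.Parameter(basis.clone())
        # Zero-init A so the adapter is a no-op before training, as in LoRA.
        self.A = nn.Parameter(torch.zeros(r, base_linear.in_features))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The update Delta_W = B @ A begins (and, if B is frozen, stays)
        # in the span of the selected SAE feature directions.
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T


def select_features(mean_act_unsafe: torch.Tensor,
                    mean_act_safe: torch.Tensor, r: int = 16) -> torch.Tensor:
    """Hypothetical feature scoring: keep the r SAE features whose mean
    activation rises most on safety-relevant prompts versus benign ones."""
    return torch.topk(mean_act_unsafe - mean_act_safe, k=r).indices
```

The interpretability claim follows from this structure: because each column of `B` corresponds to a named SAE feature, any learned update direction decomposes over semantically grounded concepts rather than opaque, polysemantic dimensions.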
Similar Papers
SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation
Machine Learning (CS)
Keeps AI safe when learning new things.
Decoupling Safety into Orthogonal Subspace: Cost-Efficient and Performance-Preserving Alignment for Large Language Models
Computation and Language
Makes AI safe without losing smarts.
Low-Rank Adapting Models for Sparse Autoencoders
Machine Learning (CS)
Makes AI understand itself better, faster.