Latent Adversarial Training Improves the Representation of Refusal
By: Alexandra Abbas, Nora Petrova, Helios Ael Lyons, and more
Potential Business Impact:
Makes AI safer by changing how it says "no."
Recent work has shown that language models' refusal behavior is primarily encoded in a single direction in their latent space, making it vulnerable to targeted attacks. Although Latent Adversarial Training (LAT) attempts to improve robustness by introducing noise during training, a key question remains: how does this noise-based training affect the underlying representation of refusal behavior? Understanding this encoding is crucial for evaluating LAT's effectiveness and limitations, just as the discovery of linear refusal directions revealed vulnerabilities in traditional supervised safety fine-tuning (SSFT). Analyzing Llama 2 7B, we examine how LAT reorganizes refusal behavior in the model's latent space compared to SSFT and embedding space adversarial training (AT). By computing activation differences between paired harmful and harmless instructions and applying Singular Value Decomposition (SVD), we find that LAT significantly alters the refusal representation, concentrating it in the first two SVD components, which explain approximately 75 percent of the variance in the activation differences, substantially more than in the reference models. This concentrated representation yields more effective and transferable refusal vectors for ablation attacks: LAT models show improved robustness when attacked with vectors extracted from reference models but become more vulnerable to self-generated vectors than SSFT and AT models. Our findings suggest that LAT's training perturbations enable a more comprehensive representation of refusal behavior, highlighting both its potential strengths and vulnerabilities for improving model safety.
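The analysis pipeline described in the abstract (activation differences between harmful and harmless instruction pairs, SVD of those differences, and directional ablation) can be illustrated with a minimal sketch. This is not the authors' code: the arrays `harmful_acts` and `harmless_acts` are hypothetical placeholders standing in for residual-stream activations collected at one layer and token position, and the helper `ablate` is an illustrative name.

```python
# Minimal sketch of the abstract's analysis, assuming activations have already
# been collected as (num_prompts, hidden_dim) arrays at a fixed layer/position.
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 4096                                      # Llama 2 7B hidden size
harmful_acts = rng.normal(size=(256, hidden_dim))      # placeholder data
harmless_acts = rng.normal(size=(256, hidden_dim))     # placeholder data

# Per-pair activation differences between harmful and harmless instructions.
diffs = harmful_acts - harmless_acts                   # (num_pairs, hidden_dim)

# SVD of the centered difference matrix; squared singular values give the
# share of variance each component of the refusal representation explains.
centered = diffs - diffs.mean(axis=0, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)
print(f"Variance explained by first two components: {explained[:2].sum():.2%}")

# A candidate refusal direction is the top right singular vector; an ablation
# attack removes its projection from the model's activations.
refusal_dir = Vt[0] / np.linalg.norm(Vt[0])

def ablate(activation: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project the refusal direction out of a single activation vector."""
    return activation - np.dot(activation, direction) * direction
```

Under this sketch, the paper's headline observation corresponds to `explained[:2].sum()` being around 0.75 for LAT models and notably lower for the SSFT and AT reference models.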
Similar Papers
LatentGuard: Controllable Latent Steering for Robust Refusal of Attacks and Reliable Response Generation
Artificial Intelligence
Keeps AI helpful but stops it from saying bad things.
Beyond I'm Sorry, I Can't: Dissecting Large Language Model Refusal
Computation and Language
Makes AI ignore safety rules to answer bad questions.
Probing Latent Subspaces in LLM for AI Security: Identifying and Manipulating Adversarial States
Machine Learning (CS)
Makes AI models safer from harmful tricks.