Steering in the Shadows: Causal Amplification for Activation Space Attacks in Large Language Models
By: Zhiyuan Xu, Stanislav Abaimov, Joseph Gardiner, and more
Potential Business Impact:
Lets attackers make an AI behave badly by tampering with its internal activations.
Modern large language models (LLMs) are typically secured by auditing data, prompts, and refusal policies, while treating the forward pass as an implementation detail. We show that intermediate activations in decoder-only LLMs form a vulnerable attack surface for behavioral control. Building on recent findings on attention sinks and compression valleys, we identify a high-gain region in the residual stream where small, well-aligned perturbations are causally amplified along the autoregressive trajectory, a phenomenon we term the Causal Amplification Effect (CAE). We exploit this effect via Sensitivity-Scaled Steering (SSS), a progressive activation-level attack that combines beginning-of-sequence (BOS) anchoring with sensitivity-based reinforcement to focus a limited perturbation budget on the most vulnerable layers and tokens. Across multiple open-weight models and four behavioral axes, SSS induces large shifts in evil, hallucination, sycophancy, and sentiment while preserving high coherence and general capabilities, turning activation steering into a concrete security concern for white-box and supply-chain LLM deployments.
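The abstract describes activation-level steering only at a high level; the minimal sketch below illustrates the generic mechanism of injecting a small perturbation into one decoder layer's residual stream via a PyTorch forward hook. It is not the paper's SSS attack: the model name, layer index, scale, and the random stand-in steering vector are all placeholder assumptions for illustration.

    # Minimal, illustrative sketch of activation-level steering via a forward hook.
    # NOT the paper's SSS method: model, layer_idx, alpha, and the steering vector
    # below are placeholders; a real attack would use a learned behavioral direction.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder open-weight model
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    layer_idx = 6   # assumed mid-depth layer; the paper's "high-gain region" is not specified here
    alpha = 4.0     # perturbation scale, i.e. the attack budget
    steer = torch.randn(model.config.hidden_size)  # stand-in for a behavioral direction
    steer = steer / steer.norm()

    def add_steering(module, inputs, output):
        # GPT-2 decoder blocks return a tuple whose first element holds the hidden
        # states of shape (batch, seq_len, hidden); add the scaled direction to every token.
        hidden_states = output[0] + alpha * steer.to(output[0].dtype)
        return (hidden_states,) + output[1:]

    handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)

    ids = tok("The weather today is", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**ids, max_new_tokens=30, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))

    handle.remove()  # restore the unmodified forward pass

The hook-based injection requires only white-box access to the loaded weights and forward pass, which is the deployment scenario the abstract flags as a supply-chain concern.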
Similar Papers
Steering Large Language Model Activations in Sparse Spaces
Machine Learning (CS)
Teaches AI to follow instructions better.
SALT: Steering Activations towards Leakage-free Thinking in Chain of Thought
Cryptography and Security
Keeps private thoughts hidden inside AI assistants.
Patterns and Mechanisms of Contrastive Activation Engineering
Artificial Intelligence
Changes AI answers without retraining it.