Steering in the Shadows: Causal Amplification for Activation Space Attacks in Large Language Models

Published: November 21, 2025 | arXiv ID: 2511.17194v1

By: Zhiyuan Xu, Stanislav Abaimov, Joseph Gardiner, and more

Potential Business Impact:

Shows that an attacker with white-box or supply-chain access to a model's internal activations can steer it toward harmful, hallucinatory, sycophantic, or biased behavior without changing prompts or training data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Modern large language models (LLMs) are typically secured by auditing data, prompts, and refusal policies, while treating the forward pass as an implementation detail. We show that intermediate activations in decoder-only LLMs form a vulnerable attack surface for behavioral control. Building on recent findings on attention sinks and compression valleys, we identify a high-gain region in the residual stream where small, well-aligned perturbations are causally amplified along the autoregressive trajectory, a phenomenon we term the Causal Amplification Effect (CAE). We exploit this effect via Sensitivity-Scaled Steering (SSS), a progressive activation-level attack that combines beginning-of-sequence (BOS) anchoring with sensitivity-based reinforcement to focus a limited perturbation budget on the most vulnerable layers and tokens. Across multiple open-weight models and four behavioral axes, SSS induces large shifts along the evil, hallucination, sycophancy, and sentiment axes while preserving high coherence and general capabilities, turning activation steering into a concrete security concern for white-box and supply-chain LLM deployments.
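To make the attack surface concrete, the sketch below shows the general mechanism of activation-level steering that SSS builds on: adding a behavioral direction into the residual stream of a chosen decoder layer at generation time. This is a minimal illustration, not the paper's code; the layer index, steering direction, perturbation budget, and per-token weighting (including the crude stand-in for BOS anchoring) are all hypothetical placeholders for SSS's sensitivity-scaled procedure.

```python
# Minimal activation-steering sketch (illustration only, not the paper's SSS implementation).
# Assumes a Hugging Face-style decoder-only model; direction, layer choice, and weights
# are placeholders for the paper's sensitivity-based reinforcement and BOS anchoring.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any open-weight decoder-only model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6                       # hypothetical "high-gain" layer
hidden = model.config.hidden_size
direction = torch.randn(hidden)     # placeholder for a learned behavioral direction
direction = direction / direction.norm()
budget = 4.0                        # illustrative total perturbation budget

def steering_hook(module, inputs, output):
    # Decoder blocks return a tuple whose first element is the residual-stream hidden states.
    hidden_states = output[0] if isinstance(output, tuple) else output
    seq_len = hidden_states.shape[1]
    # Toy per-token weights: emphasize the first position (a crude stand-in for BOS
    # anchoring) and decay over later tokens, then normalize to the budget.
    weights = torch.linspace(1.0, 0.2, seq_len, device=hidden_states.device)
    weights[0] = 2.0
    weights = budget * weights / weights.sum()
    steered = hidden_states + weights[None, :, None] * direction.to(hidden_states.device)
    if isinstance(output, tuple):
        return (steered,) + output[1:]
    return steered

# For GPT-2 the decoder blocks live under model.transformer.h; other architectures differ.
handle = model.transformer.h[layer_idx].register_forward_hook(steering_hook)
try:
    prompt = "The service at the restaurant was"
    ids = tok(prompt, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=30, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()
```

In this toy form the perturbation is applied uniformly across the batch at a single layer; the paper's contribution is in choosing where and how strongly to perturb, concentrating the budget on the layers and tokens where causal amplification makes small edits most effective.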

Country of Origin
🇬🇧 United Kingdom

Page Count
31 pages

Category
Computer Science:
Cryptography and Security