SALT: Steering Activations towards Leakage-free Thinking in Chain of Thought
By: Shourya Batra, Pierce Tillman, Samarth Gaggar, and more
Potential Business Impact:
Keeps private thoughts hidden inside AI assistants.
As Large Language Models (LLMs) evolve into personal assistants with access to sensitive user data, they face a critical privacy challenge: while prior work has addressed output-level privacy, recent findings reveal that LLMs often leak private information through their internal reasoning processes, violating contextual privacy expectations. These leaky thoughts occur when models inadvertently expose sensitive details in their reasoning traces, even when the final outputs appear safe. The challenge lies in preventing such leakage without compromising the model's reasoning capabilities, requiring a delicate balance between privacy and utility. We introduce Steering Activations towards Leakage-free Thinking (SALT), a lightweight test-time intervention that mitigates privacy leakage in a model's Chain of Thought (CoT) by injecting targeted steering vectors into the hidden states of the layers we identify as most responsible for leakage. Through experiments across multiple LLMs on the contextual privacy leakage (CPL) dataset AirGapAgent-R, we demonstrate that SALT reduces CPL by $18.2\%$ on QwQ-32B, $17.9\%$ on Llama-3.1-8B, and $31.2\%$ on DeepSeek, while maintaining comparable task performance and utility. Our work establishes SALT as a practical approach to test-time privacy protection in reasoning-capable language models, offering a path toward safer deployment of LLM-based personal agents.
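To make the mechanism concrete, below is a minimal sketch of test-time activation steering in the spirit of SALT: a steering vector is built from the difference between activations on "safe" and "leaky" reasoning examples, then added to the hidden states of one layer during generation via a forward hook. The model name, layer index, steering strength, and contrastive prompts are illustrative assumptions, not the paper's actual procedure for constructing steering vectors or selecting high-leakage layers.

```python
# Hedged sketch of test-time activation steering (not the authors' code).
# Assumptions: Llama-style decoder layers at model.model.layers, layer index,
# steering strength ALPHA, and the two contrastive prompts are all placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"  # one of the model families evaluated
LAYER_IDX = 20   # hypothetical "high-leakage" layer
ALPHA = 4.0      # hypothetical steering strength

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

def mean_hidden_state(text: str, layer: int) -> torch.Tensor:
    """Average hidden state of `text` at the output of decoder layer `layer`."""
    ids = tok(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so layer outputs start at index 1.
    return out.hidden_states[layer + 1].mean(dim=1).squeeze(0)

# Contrastive examples: reasoning that surfaces private details vs. reasoning that
# withholds them. A real pipeline would average over many such pairs.
leaky = "Reasoning: the user's SSN is 123-45-6789, so I should include it in the form..."
safe = "Reasoning: the user's SSN is private and not needed here, so I will omit it..."
steer = mean_hidden_state(safe, LAYER_IDX) - mean_hidden_state(leaky, LAYER_IDX)
steer = steer / steer.norm()

def steering_hook(module, inputs, output):
    # Llama-style decoder layers return a tuple; hidden states are element 0.
    hidden = output[0] + ALPHA * steer.to(output[0].dtype).to(output[0].device)
    return (hidden,) + output[1:]

handle = model.model.layers[LAYER_IDX].register_forward_hook(steering_hook)
try:
    prompt = "Think step by step: should the hotel booking request include my medical history?"
    ids = tok(prompt, return_tensors="pt").to(model.device)
    gen = model.generate(**ids, max_new_tokens=128, do_sample=False)
    print(tok.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so subsequent calls run unsteered
```

Because the intervention is a single vector addition at inference time, it requires no fine-tuning and can be toggled per request, which is what makes this style of steering attractive as a lightweight privacy control.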
Similar Papers
Chain-of-Sanitized-Thoughts: Plugging PII Leakage in CoT of Large Reasoning Models
Artificial Intelligence
Keeps private thoughts safe while computers think.
Thought Purity: Defense Paradigm For Chain-of-Thought Attack
Machine Learning (CS)
Protects smart AI from being tricked.
Steering in the Shadows: Causal Amplification for Activation Space Attacks in Large Language Models
Cryptography and Security
Makes AI do bad things by changing its thoughts.