Mitigating Memorization in LLMs using Activation Steering
By: Manan Suri, Nishit Anand, Amisha Bhaskar
Potential Business Impact:
Stops AI from remembering and sharing private info.
The memorization of training data by Large Language Models (LLMs) poses significant risks, including privacy leaks and the regurgitation of copyrighted content. Activation steering, a technique that directly intervenes in model activations, has emerged as a promising approach for manipulating LLM behavior. In this work, we explore the effectiveness of activation steering in reducing memorization while preserving generalization capabilities. We conduct empirical evaluations on a controlled memorization benchmark of literary material and demonstrate that our method suppresses memorized content in Gemma with minimal degradation in model performance. We also analyze the trade-off between suppression effectiveness and linguistic fluency, highlighting the advantages and limitations of activation-based interventions. Our findings contribute to ongoing efforts to develop safer and more privacy-preserving LLMs by providing a practical and efficient mechanism for mitigating unintended memorization.
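The abstract does not specify the intervention itself, but a typical activation-steering setup adds (or subtracts) a fixed direction in a chosen layer's residual stream at inference time. The sketch below illustrates this pattern with PyTorch and Hugging Face transformers; the model name, intervention layer, steering strength `alpha`, and the random placeholder vector `steer_dir` are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch of inference-time activation steering (illustrative, not the
# paper's exact method): a precomputed "memorization direction" is subtracted
# from one decoder layer's hidden states via a forward hook during generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "google/gemma-2b"   # assumed; any causal LM with the same layout works
layer_idx = 12                   # assumed intervention layer
alpha = 4.0                      # assumed steering strength

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

# Placeholder direction; in practice it would be derived from activation
# differences between memorized and non-memorized continuations.
steer_dir = torch.randn(model.config.hidden_size, dtype=model.dtype)
steer_dir = steer_dir / steer_dir.norm()

def steering_hook(module, inputs, output):
    # Decoder layers return a tuple; element 0 holds the hidden states
    # of shape (batch, seq_len, hidden_size).
    hidden = output[0] - alpha * steer_dir.to(output[0].device)
    return (hidden,) + output[1:]

# Gemma exposes its decoder blocks under model.model.layers.
handle = model.model.layers[layer_idx].register_forward_hook(steering_hook)

prompt = "It was the best of times, it was"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the unsteered model
```

In this kind of setup, the trade-off the abstract mentions shows up directly in `alpha`: larger values suppress memorized continuations more strongly but risk degrading fluency on unrelated prompts.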
Similar Papers
Mitigating Content Effects on Reasoning in Language Models through Fine-Grained Activation Steering
Artificial Intelligence
Makes AI think more logically, less based on guesses.
Activation Steering for Bias Mitigation: An Interpretable Approach to Safer LLMs
Artificial Intelligence
Fixes AI to stop saying unfair or wrong things.
Steerable Chatbots: Personalizing LLMs with Preference-Based Activation Steering
Human-Computer Interaction
Lets AI understand your hidden feelings better.