Don't Forget It! Conditional Sparse Autoencoder Clamping Works for Unlearning
By: Matthew Khoriaty, Andrii Shportko, Gustavo Mercier, and more
Potential Business Impact:
Teaches AI to forget dangerous knowledge.
Recent developments in Large Language Model (LLM) capabilities have brought great potential but also new risks. For example, LLMs with knowledge of bioweapons, advanced chemistry, or cyberattacks could cause serious harm in the wrong hands or during malfunctions. Because LLMs are near-black boxes, intuitive interpretation of their internals remains an open research question, which prevents developers from easily controlling model behavior and capabilities. Sparse Autoencoders (SAEs) have recently emerged as a potential method for unraveling the representations of concepts in LLM internals, allowing developers to steer model outputs by directly modifying hidden activations. In this paper, we use SAEs to identify unwanted concepts from the Weapons of Mass Destruction Proxy (WMDP) dataset within gemma-2-2b internals and apply feature steering to reduce the model's ability to answer harmful questions while retaining its performance on harmless queries. Our results renew optimism about the viability of SAE-based explicit knowledge unlearning techniques.
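The intervention described here, clamping targeted SAE features only when they fire, can be sketched in a few lines. Below is a minimal, illustrative example (not the authors' exact implementation), assuming a trained SAE with standard encoder/decoder weights `W_enc`, `b_enc`, `W_dec`, `b_dec` attached to one residual-stream layer; the feature indices, threshold, and clamp value are placeholders.

```python
# Sketch of conditional SAE clamping on a residual-stream activation.
# Assumes `sae` exposes W_enc (d_model, d_sae), b_enc (d_sae),
# W_dec (d_sae, d_model), and b_dec (d_model) as torch tensors.
import torch

def conditional_clamp(hidden, sae, target_feature_ids,
                      threshold=1.0, clamp_value=-5.0):
    """Clamp selected SAE features only where they fire above a threshold.

    hidden: residual-stream activations, shape (batch, seq, d_model)
    target_feature_ids: indices of features tied to the unwanted concept
    """
    # Encode the hidden state into sparse feature activations.
    feats = torch.relu((hidden - sae.b_dec) @ sae.W_enc + sae.b_enc)

    # Intervene only on tokens where a targeted feature is actually active.
    selected = feats[..., target_feature_ids]          # (batch, seq, k)
    fire_mask = selected > threshold

    clamped = feats.clone()
    clamped[..., target_feature_ids] = torch.where(
        fire_mask, torch.full_like(selected, clamp_value), selected
    )

    # Apply only the change in the SAE reconstruction, leaving the
    # reconstruction error of the original activation untouched.
    delta = (clamped - feats) @ sae.W_dec
    return hidden + delta
```

In practice this function would run inside a forward hook on the chosen layer, so harmless prompts (where the targeted features stay below threshold) pass through unmodified.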
Similar Papers
SAEs Can Improve Unlearning: Dynamic Sparse Autoencoder Guardrails for Precision Unlearning in LLMs
Machine Learning (CS)
Makes AI forget bad information safely and quickly.
Sparse-Autoencoder-Guided Internal Representation Unlearning for Large Language Models
Computation and Language
Makes AI forget specific information completely.
CRISP: Persistent Concept Unlearning via Sparse Autoencoders
Computation and Language
Removes bad ideas from AI, keeps good ones.