Rethinking Sparse Autoencoders: Select-and-Project for Fairness and Control from Encoder Features Alone
By: Antonio Bărbălau, Cristian Daniel Păduraru, Teodor Poncu, et al.
Potential Business Impact:
Makes AI fairer by changing how it learns.
Sparse Autoencoders (SAEs) have proven valuable due to their ability to provide interpretable and steerable representations. Current debiasing methods based on SAEs manipulate these sparse activations presuming that feature representations are housed within decoder weights. We challenge this fundamental assumption and introduce an encoder-focused alternative for representation debiasing, contributing three key findings: (i) we highlight an unconventional SAE feature selection strategy, (ii) we propose a novel SAE debiasing methodology that orthogonalizes input embeddings against encoder weights, and (iii) we establish a performance-preserving mechanism during debiasing through encoder weight interpolation. Our Selection and Projection framework, termed S&P TopK, surpasses conventional SAE usage in fairness metrics by a factor of up to 3.2 and advances state-of-the-art test-time VLM debiasing results by a factor of up to 1.8 while maintaining downstream performance.
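The core encoder-side idea from the abstract can be sketched in a few lines: select features from the encoder's pre-activations, then orthogonalize the input embedding against the corresponding encoder weight rows. The function names, the TopK selection heuristic, and all shapes below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def select_topk_features(W_enc, x, k):
    """Hypothetical selection step: pick the k encoder features
    most strongly activated by embedding x."""
    acts = W_enc @ x                  # pre-activation for each feature
    return np.argsort(acts)[-k:]     # indices of the top-k features

def project_out(x, W_enc, feature_ids):
    """Hypothetical projection step: remove the span of the selected
    ENCODER rows from x (rather than editing decoder directions)."""
    W = W_enc[feature_ids]           # (k, d) selected encoder rows
    Q, _ = np.linalg.qr(W.T)         # (d, k) orthonormal basis for their span
    return x - Q @ (Q.T @ x)         # x minus its projection onto that span

# Toy demonstration with random weights (assumed dimensions).
rng = np.random.default_rng(0)
d, n_features = 16, 64
W_enc = rng.normal(size=(n_features, d))
x = rng.normal(size=d)

ids = select_topk_features(W_enc, x, k=3)
x_debiased = project_out(x, W_enc, ids)

# After projection, the removed encoder features no longer fire on x.
print(np.max(np.abs(W_enc[ids] @ x_debiased)))
```

Under this sketch, the debiased embedding is exactly orthogonal to the selected encoder rows, so those features' pre-activations drop to (numerically) zero while the rest of the embedding is left intact.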
Similar Papers
Distribution-Aware Feature Selection for SAEs
Machine Learning (CS)
Helps computers understand ideas better by picking key parts.
Sparse Autoencoders Trained on the Same Data Learn Different Features
Machine Learning (CS)
AI finds different "thinking parts" each time.
On the Theoretical Understanding of Identifiable Sparse Autoencoders and Beyond
Machine Learning (CS)
Unlocks AI's hidden thoughts for better understanding.