Attribution-Guided Distillation of Matryoshka Sparse Autoencoders
By: Cristina P. Martin-Linares, Jonathan P. Ling
Sparse autoencoders (SAEs) aim to disentangle model activations into monosemantic, human-interpretable features. In practice, learned features are often redundant and vary across training runs and sparsity levels, which makes interpretations difficult to transfer and reuse. We introduce Distilled Matryoshka Sparse Autoencoders (DMSAEs), a training pipeline that distills a compact core of consistently useful features and reuses it to train new SAEs. DMSAEs run an iterative distillation cycle: train a Matryoshka SAE with a shared core, use gradient × activation to measure each feature's contribution to next-token loss in the most nested reconstruction, and keep only the smallest subset of features that explains a fixed fraction of the total attribution. Only the core encoder weight vectors are transferred across cycles; the core decoder and all non-core latents are reinitialized each time. On Gemma-2-2B layer 12 residual stream activations, seven distillation cycles (500M tokens, 65k width) yielded a distilled core of 197 repeatedly selected features. Training with this distilled core improves several SAEBench metrics and demonstrates that consistent sets of latent features can be transferred across sparsity levels.
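The cycle above can be summarized in code. The sketch below (PyTorch) is an illustrative assumption rather than the authors' released implementation: `SimpleSAE`, `grad_x_activation`, `select_core`, and `transfer_core` are hypothetical names, the Matryoshka nesting and the patched next-token loss are abstracted behind a user-supplied `loss_fn`, and only the core encoder rows are copied into the next cycle, as the abstract specifies.

```python
import torch
import torch.nn as nn


class SimpleSAE(nn.Module):
    """Toy SAE: linear encoder + ReLU latents + linear decoder (Matryoshka nesting omitted)."""

    def __init__(self, d_model: int, n_latents: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(n_latents, d_model) * 0.02)
        self.b_enc = nn.Parameter(torch.zeros(n_latents))
        self.W_dec = nn.Parameter(torch.randn(d_model, n_latents) * 0.02)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x @ self.W_enc.T + self.b_enc)

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return z @ self.W_dec.T + self.b_dec


def grad_x_activation(sae: SimpleSAE, acts: torch.Tensor, loss_fn) -> torch.Tensor:
    """Per-latent attribution |grad of loss w.r.t. latent * latent activation|, summed over the batch.
    `loss_fn` maps a reconstruction to a scalar loss (in the paper: next-token loss)."""
    z = sae.encode(acts)
    z.retain_grad()                      # z is a non-leaf tensor; keep its gradient
    loss_fn(sae.decode(z)).backward()
    return (z.grad * z).abs().sum(dim=0)


def select_core(attribution: torch.Tensor, fraction: float = 0.9) -> torch.Tensor:
    """Indices of the smallest feature subset whose attribution reaches `fraction` of the total."""
    order = torch.argsort(attribution, descending=True)
    cumulative = torch.cumsum(attribution[order], dim=0)
    k = int((cumulative < fraction * attribution.sum()).sum().item()) + 1
    return order[:k]


def transfer_core(old_sae: SimpleSAE, core_idx: torch.Tensor,
                  d_model: int, n_latents: int) -> SimpleSAE:
    """Start the next cycle: copy only the core *encoder* rows into a fresh SAE;
    the decoder and every non-core latent are reinitialized."""
    new_sae = SimpleSAE(d_model, n_latents)
    with torch.no_grad():
        new_sae.W_enc[: len(core_idx)] = old_sae.W_enc[core_idx]
    return new_sae
```

A stand-in end-to-end use, again only illustrative (an MSE loss against the original activations replaces the next-token loss obtained by patching the most nested reconstruction into the model):

```python
d_model, n_latents = 16, 64
sae = SimpleSAE(d_model, n_latents)
acts = torch.randn(8, d_model)
attr = grad_x_activation(sae, acts, lambda recon: ((recon - acts) ** 2).mean())
core = select_core(attr, fraction=0.9)
next_sae = transfer_core(sae, core, d_model, n_latents)  # train this SAE in the next cycle
```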
Similar Papers
- Learning Multi-Level Features with Matryoshka Sparse Autoencoders, Machine Learning (CS): Organizes AI learning into simple and complex ideas.
- Enforcing Orderedness to Improve Feature Consistency, Machine Learning (CS): Makes AI models' thinking more predictable and consistent.
- Sparse Autoencoders Trained on the Same Data Learn Different Features, Machine Learning (CS): AI finds different "thinking parts" each time.