Mechanistic Interpretability of Antibody Language Models Using SAEs
By: Rebonto Haque, Oliver M. Turnbull, Anisha Parsan, et al.
Sparse autoencoders (SAEs) are a mechanistic interpretability technique that has been used to provide insight into the concepts learned by large protein language models. Here, we employ TopK and Ordered SAEs to investigate an autoregressive antibody language model, p-IgGen, and to steer its generation. We show that TopK SAEs can reveal biologically meaningful latent features, but that high feature-concept correlation does not guarantee causal control over generation. In contrast, Ordered SAEs impose a hierarchical structure that reliably identifies steerable features, at the expense of more complex and less interpretable activation patterns. These findings advance the mechanistic interpretability of domain-specific protein language models and suggest that, while TopK SAEs suffice for mapping latent features to concepts, Ordered SAEs are preferable when precise generative steering is required.
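The abstract does not include an implementation, but a minimal sketch may help fix ideas about the TopK SAE mechanism and the feature-steering recipe it alludes to. The following PyTorch code is illustrative only: all names, dimensions, and the steering formula (adding a scaled, normalized decoder direction to a hidden state) are assumptions for exposition, not p-IgGen's or the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKSAE(nn.Module):
    """Minimal TopK sparse autoencoder (illustrative; not the paper's code)."""

    def __init__(self, d_model: int, d_latent: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        z = F.relu(self.encoder(x))                   # non-negative latent codes
        vals, idx = torch.topk(z, self.k, dim=-1)     # keep the k strongest features
        z_sparse = torch.zeros_like(z).scatter_(-1, idx, vals)
        return self.decoder(z_sparse), z_sparse       # reconstruction + sparse codes


def steer(h: torch.Tensor, sae: TopKSAE, feature: int, alpha: float) -> torch.Tensor:
    """Nudge a hidden state along one SAE feature's decoder direction.

    A common steering recipe, assumed here for illustration; the paper's
    exact intervention may differ.
    """
    direction = sae.decoder.weight[:, feature]        # column = feature's direction
    return h + alpha * direction / direction.norm()


# Usage sketch: train on cached residual-stream activations with an MSE
# objective, then steer generation by patching hidden states at a chosen layer.
sae = TopKSAE(d_model=512, d_latent=8192, k=32)
acts = torch.randn(16, 512)                           # stand-in for model activations
recon, codes = sae(acts)
loss = F.mse_loss(recon, acts)
steered = steer(acts, sae, feature=123, alpha=4.0)
```

One relevant design point: the TopK activation enforces exact sparsity (at most k active features per token) rather than the approximate sparsity of an L1 penalty, which is what makes individual latent features plausible candidates for concept-level interpretation and steering.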