ProtSAE: Disentangling and Interpreting Protein Language Models via Semantically-Guided Sparse Autoencoders
By: Xiangyu Liu, Haodi Lei, Yi Liu, and more
Potential Business Impact:
Reveals and controls the biologically meaningful features hidden inside protein AI models.
Sparse autoencoders (SAEs) have emerged as a powerful tool for mechanistic interpretability of large language models. Recent works apply SAEs to protein language models (PLMs), aiming to extract and analyze biologically meaningful features from their latent spaces. However, SAEs suffer from semantic entanglement, where individual neurons often mix multiple nonlinear concepts, making it difficult to reliably interpret or manipulate model behaviors. In this paper, we propose a semantically-guided SAE, called ProtSAE. Unlike existing SAEs, which require annotation datasets to filter and interpret activations after training, we guide semantic disentanglement during training using both annotation datasets and domain knowledge, mitigating the effects of entangled attributes. We design interpretability experiments showing that ProtSAE learns more biologically relevant and interpretable hidden features than previous methods. Performance analyses further demonstrate that ProtSAE maintains high reconstruction fidelity while achieving better results in interpretable probing. We also show the potential of ProtSAE for steering PLMs in downstream generation tasks.
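The abstract does not spell out the training objective, but a common way to realize semantic guidance in an SAE is to add an auxiliary loss that ties a reserved block of latent dimensions to annotated concepts, on top of the usual reconstruction and sparsity terms. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the class and function names, the binary label format, and all dimensions are assumptions made for illustration, not ProtSAE's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GuidedSparseAutoencoder(nn.Module):
    """Sparse autoencoder over PLM hidden states, with a block of latent
    dimensions reserved for annotated biological concepts (assumed setup)."""

    def __init__(self, d_model: int, d_latent: int, n_concepts: int):
        super().__init__()
        assert n_concepts <= d_latent
        self.n_concepts = n_concepts              # first n_concepts latents are concept-aligned
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, h: torch.Tensor):
        pre = self.encoder(h)                     # pre-activation codes, reused as guidance logits
        z = F.relu(pre)                           # non-negative sparse code
        h_hat = self.decoder(z)                   # reconstruction of the PLM hidden state
        return z, h_hat, pre


def guided_sae_loss(model, h, concept_labels, l1_coef=1e-3, guide_coef=1.0):
    """Reconstruction + L1 sparsity + a semantic-guidance term.

    concept_labels: (batch, n_concepts) binary annotations (e.g. secondary
    structure or functional-site labels); this label format is an assumption.
    """
    z, h_hat, pre = model(h)
    recon = F.mse_loss(h_hat, h)                  # preserve reconstruction fidelity
    sparsity = z.abs().mean()                     # standard SAE sparsity penalty
    # Guidance: each reserved latent should fire exactly when its annotated
    # concept is present, discouraging other concepts from loading onto it.
    logits = pre[:, :model.n_concepts]
    guidance = F.binary_cross_entropy_with_logits(logits, concept_labels.float())
    return recon + l1_coef * sparsity + guide_coef * guidance


# Example: residue-level hidden states from a hypothetical 1280-dim PLM layer.
sae = GuidedSparseAutoencoder(d_model=1280, d_latent=8192, n_concepts=32)
h = torch.randn(4, 1280)
labels = torch.randint(0, 2, (4, 32))
loss = guided_sae_loss(sae, h, labels)
loss.backward()
```

Using the pre-activations as guidance logits (rather than the ReLU outputs) keeps the concept loss well-behaved for negative labels; the remaining d_latent - n_concepts dimensions are left unconstrained to absorb whatever else is needed for faithful reconstruction.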
Similar Papers
Towards Interpretable Protein Structure Prediction with Sparse Autoencoders
Biomolecules
Explains how AI models predict protein shapes.
Evaluating Sparse Autoencoders for Monosemantic Representation
Machine Learning (CS)
Tests how well sparse autoencoders capture single, clear concepts.
AlignSAE: Concept-Aligned Sparse Autoencoders
Machine Learning (CS)
Aligns AI features with specific concepts so they can be inspected and adjusted.