PLM-eXplain: Divide and Conquer the Protein Embedding Space
By: Jan van Eck, Dea Gogishvili, Wilson Silva, and more
Potential Business Impact:
Makes protein prediction models explainable without losing accuracy.
Protein language models (PLMs) have revolutionised computational biology through their ability to generate powerful sequence representations for diverse prediction tasks. However, their black-box nature limits biological interpretation and translation to actionable insights. We present an explainable adapter layer, PLM-eXplain (PLM-X), that bridges this gap by factoring PLM embeddings into two components: an interpretable subspace based on established biochemical features, and a residual subspace that preserves the model's predictive power. Using embeddings from ESM2, our adapter incorporates well-established properties, including secondary structure and hydropathy, while maintaining high performance. We demonstrate the effectiveness of our approach across three protein-level classification tasks: prediction of extracellular vesicle association, identification of transmembrane helices, and prediction of aggregation propensity. PLM-X enables biological interpretation of model decisions without sacrificing accuracy, offering a generalisable solution for enhancing PLM interpretability across various downstream applications. This work addresses a critical need in computational biology by providing a bridge between powerful deep learning models and actionable biological insights.
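The core idea of factoring an embedding into an interpretable component (explained by known biochemical features) and a residual component can be sketched with a simple linear decomposition. This is a minimal illustration only, not the authors' implementation: the dimensions, the random toy data, and the use of a plain least-squares fit are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): 100 sequences, 32-dim "PLM" embeddings,
# 3 established biochemical features (e.g. hydropathy, helix propensity).
E = rng.normal(size=(100, 32))   # PLM embeddings (e.g. from ESM2)
F = rng.normal(size=(100, 3))    # known biochemical feature values

# Interpretable component: the part of E linearly explained by the
# features. Solve F @ W ~= E in the least-squares sense.
W, *_ = np.linalg.lstsq(F, E, rcond=None)
E_interp = F @ W                 # interpretable subspace
E_resid = E - E_interp           # residual subspace keeps the rest

# The two components sum exactly back to the original embedding,
# so a downstream classifier on [E_interp, E_resid] loses nothing.
assert np.allclose(E_interp + E_resid, E)
```

A downstream model trained on both components can then attribute part of each prediction to the named biochemical features, which is the kind of interpretability the abstract describes.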
Similar Papers
Elucidating the Design Space of Multimodal Protein Language Models
Machine Learning (CS)
Helps computers understand protein shapes better.
PepTriX: A Framework for Explainable Peptide Analysis through Protein Language Models
Artificial Intelligence
Helps find new medicines by understanding tiny protein parts.
Understanding protein function with a multimodal retrieval-augmented foundation model
Quantitative Methods
Helps predict what jobs proteins do in the body.