MMPG: MoE-based Adaptive Multi-Perspective Graph Fusion for Protein Representation Learning
By: Yusong Wang, Jialun Shen, Zhihao Wu, and more
Graph Neural Networks (GNNs) have been widely adopted for Protein Representation Learning (PRL), as residue interaction networks can be naturally represented as graphs. Current GNN-based PRL methods typically rely on single-perspective graph construction strategies, which capture only partial properties of residue interactions and thus yield incomplete protein representations. To address this limitation, we propose MMPG, a framework that constructs protein graphs from multiple perspectives and adaptively fuses them via a Mixture of Experts (MoE) for PRL. MMPG constructs graphs from physical, chemical, and geometric perspectives to characterize different properties of residue interactions. To capture both perspective-specific features and their synergies, we develop an MoE module that dynamically routes perspectives to specialized experts, which learn intrinsic features and cross-perspective interactions. We quantitatively verify that the MoE automatically specializes experts in modeling distinct levels of interaction: from individual representations, to pairwise inter-perspective synergies, and ultimately to a global consensus across all perspectives. By integrating this multi-level information, MMPG produces superior protein representations and achieves strong performance on four downstream protein tasks.
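The abstract describes routing multiple perspective-specific embeddings through a gating network to specialized experts and fusing the expert outputs. The paper's actual architecture is not given here, so the sketch below is only a minimal illustration of that general MoE-fusion pattern: all dimensions, the linear experts, and the softmax gate are assumptions, not MMPG's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d, n_experts = 16, 4  # hypothetical embedding size and expert count

# Hypothetical per-perspective residue-graph embeddings
# (stand-ins for the physical, chemical, and geometric views)
perspectives = {k: rng.standard_normal(d)
                for k in ("physical", "chemical", "geometric")}

# Gating network: map the concatenated perspectives to expert weights
x = np.concatenate(list(perspectives.values()))        # shape (3*d,)
W_gate = rng.standard_normal((n_experts, x.size)) * 0.1
gate = softmax(W_gate @ x)                             # shape (n_experts,), sums to 1

# Each expert here is a simple linear map over the concatenated input;
# the fused representation is the gate-weighted sum of expert outputs
experts = [rng.standard_normal((d, x.size)) * 0.1 for _ in range(n_experts)]
fused = sum(g * (E @ x) for g, E in zip(gate, experts))  # shape (d,)
```

Because the gate is computed from all perspectives jointly, individual experts can specialize in single views, pairwise combinations, or a global consensus, which is the multi-level behavior the abstract reports.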