Scalable Heterogeneous Graph Learning via Heterogeneous-aware Orthogonal Prototype Experts
By: Wei Zhou, Hong Huang, Ruize Shi, and more
Potential Business Impact:
Lets computers make better predictions on complex, mixed-type networks.
Heterogeneous Graph Neural Networks (HGNNs) have advanced mainly through better encoders, yet their decoding/projection stage still relies on a single shared linear head, assuming it can map rich node embeddings to labels. We call this the Linear Projection Bottleneck: in heterogeneous graphs, contextual diversity and long-tail shifts make a global head miss fine semantics, overfit hub nodes, and underserve tail nodes. While Mixture-of-Experts (MoE) could help, naively applying it clashes with structural imbalance and risks expert collapse. We propose HOPE, a Heterogeneous-aware Orthogonal Prototype Experts framework that serves as a plug-and-play replacement for the standard prediction head. HOPE uses learnable prototype-based routing to assign instances to experts by similarity, letting expert usage follow the natural long-tail distribution, and adds expert orthogonalization to encourage diversity and prevent collapse. Experiments on four real datasets show consistent gains across state-of-the-art HGNN backbones with minimal overhead.
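To make the idea concrete, below is a minimal PyTorch sketch of a prototype-routed expert head with an orthogonality penalty, assuming cosine-similarity routing over learnable prototypes and one linear classifier per expert. The class name `HOPEHead`, the top-k routing, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HOPEHead(nn.Module):
    """Hypothetical prototype-routed mixture-of-experts prediction head (sketch)."""

    def __init__(self, embed_dim: int, num_classes: int, num_experts: int = 4, top_k: int = 1):
        super().__init__()
        # One learnable prototype per expert; routing is by similarity to these.
        self.prototypes = nn.Parameter(torch.randn(num_experts, embed_dim))
        # Each expert is a small linear classifier replacing the single shared head.
        self.experts = nn.ModuleList(
            nn.Linear(embed_dim, num_classes) for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between node embeddings [N, D] and prototypes [E, D].
        sim = F.normalize(h, dim=-1) @ F.normalize(self.prototypes, dim=-1).t()  # [N, E]
        weights = F.softmax(sim, dim=-1)

        # Sparse routing: keep the top-k experts per node, renormalize their weights.
        topk_w, topk_idx = weights.topk(self.top_k, dim=-1)                      # [N, k]
        topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)

        # Combine the selected experts' logits, weighted by routing scores.
        all_logits = torch.stack([expert(h) for expert in self.experts], dim=1)  # [N, E, C]
        gathered = all_logits.gather(
            1, topk_idx.unsqueeze(-1).expand(-1, -1, all_logits.size(-1))
        )                                                                         # [N, k, C]
        return (topk_w.unsqueeze(-1) * gathered).sum(dim=1)                      # [N, C]

    def orthogonality_loss(self) -> torch.Tensor:
        # Penalize off-diagonal prototype similarity so experts stay diverse
        # and routing does not collapse onto a single expert.
        p = F.normalize(self.prototypes, dim=-1)
        gram = p @ p.t()
        off_diag = gram - torch.eye(gram.size(0), device=gram.device)
        return off_diag.pow(2).mean()


# Usage sketch: plug the head onto embeddings from any HGNN backbone.
head = HOPEHead(embed_dim=64, num_classes=5)
h = torch.randn(128, 64)                      # node embeddings from the encoder
labels = torch.randint(0, 5, (128,))
loss = F.cross_entropy(head(h), labels) + 0.1 * head.orthogonality_loss()
```

Because only the prediction head changes, the same sketch would sit on top of different HGNN encoders; the 0.1 weight on the orthogonality term is an arbitrary placeholder.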
Similar Papers
Mixture of Message Passing Experts with Routing Entropy Regularization for Node Classification
Machine Learning (CS)
Helps computers learn better from messy, connected data.
Mixture of Decoupled Message Passing Experts with Entropy Constraint for General Node Classification
Machine Learning (CS)
Helps computers understand different kinds of networks better.
Adaptive Heterogeneous Graph Neural Networks: Bridging Heterophily and Heterogeneity
Machine Learning (CS)
Helps computers understand messy, connected information better.