Score: 1

Scalable Heterogeneous Graph Learning via Heterogeneous-aware Orthogonal Prototype Experts

Published: January 9, 2026 | arXiv ID: 2601.05537v1

By: Wei Zhou, Hong Huang, Ruize Shi, and more

Potential Business Impact:

Helps AI systems make more accurate predictions on complex, mixed-type networks (such as graphs linking different kinds of entities) with little added computational cost.

Business Areas:
A/B Testing, Data and Analytics

Heterogeneous Graph Neural Networks (HGNNs) have advanced mainly through better encoders, yet their decoding/projection stage still relies on a single shared linear head, assuming it can map rich node embeddings to labels. We call this the Linear Projection Bottleneck: in heterogeneous graphs, contextual diversity and long-tail shifts make a global head miss fine-grained semantics, overfit hub nodes, and underserve tail nodes. While Mixture-of-Experts (MoE) could help, naively applying it clashes with structural imbalance and risks expert collapse. We propose HOPE, a Heterogeneous-aware Orthogonal Prototype Experts framework that serves as a plug-and-play replacement for the standard prediction head. HOPE uses learnable prototype-based routing to assign instances to experts by similarity, letting expert usage follow the natural long-tail distribution, and adds expert orthogonalization to encourage diversity and prevent collapse. Experiments on four real datasets show consistent gains across SOTA HGNN backbones with minimal overhead.
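The abstract only outlines the mechanism, so the sketch below (PyTorch) illustrates what a prototype-routed expert head with an orthogonality penalty might look like. The class name `PrototypeExpertHead`, the top-k routing, and the loss form are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeExpertHead(nn.Module):
    """Hypothetical sketch of a prototype-routed expert head (not the paper's code).

    Each expert is a small linear classifier paired with a learnable prototype
    vector; a node embedding is routed to the experts whose prototypes it is
    most similar to, and an orthogonality penalty on the prototypes discourages
    experts from collapsing onto the same region of the embedding space.
    """

    def __init__(self, dim: int, num_classes: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_experts, dim))
        self.experts = nn.ModuleList(
            [nn.Linear(dim, num_classes) for _ in range(num_experts)]
        )
        self.top_k = top_k

    def forward(self, h: torch.Tensor):
        # Similarity-based routing: cosine similarity between embeddings and prototypes.
        sim = F.normalize(h, dim=-1) @ F.normalize(self.prototypes, dim=-1).t()  # [N, E]
        topk_sim, topk_idx = sim.topk(self.top_k, dim=-1)                        # [N, k]
        weights = topk_sim.softmax(dim=-1)                                       # [N, k]

        # Weighted mixture of the selected experts' predictions.
        logits = torch.zeros(h.size(0), self.experts[0].out_features, device=h.device)
        for slot in range(self.top_k):
            idx = topk_idx[:, slot]
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    logits[mask] += weights[mask, slot].unsqueeze(-1) * expert(h[mask])

        # Orthogonality penalty: push the prototype Gram matrix toward the identity.
        p = F.normalize(self.prototypes, dim=-1)
        ortho_loss = ((p @ p.t() - torch.eye(p.size(0), device=p.device)) ** 2).mean()
        return logits, ortho_loss
```

Used this way, the module would replace the single shared linear head after any HGNN encoder: the classification loss is computed on `logits` and the `ortho_loss` term is added to the training objective with some weighting coefficient (a hyperparameter in this sketch).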

Country of Origin
🇨🇳 China

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)