Leveraging Manifold Embeddings for Enhanced Graph Transformer Representations and Learning
By: Ankit Jyothish, Ali Jannesari
Potential Business Impact:
Helps computers understand complex networks better.
Graph transformers typically embed every node in a single Euclidean space, blurring heterogeneous topologies. We prepend a lightweight Riemannian mixture-of-experts layer that routes each node to the kind of manifold (spherical, flat, or hyperbolic) that best matches its local structure. These projections give the latent space an intrinsic geometric explanation. Inserted into a state-of-the-art ensemble graph transformer, this projector lifts accuracy by up to 3% on four node-classification benchmarks, and the ensemble ensures that both Euclidean and non-Euclidean features are captured. Explicit, geometry-aware projection thus sharpens predictive power while making graph representations more interpretable.
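For concreteness, here is a minimal sketch (not the authors' code) of what such a Riemannian mixture-of-experts projector could look like in PyTorch: a gating network softly routes each node embedding among a Euclidean (flat), spherical, and hyperbolic expert, where the two curved experts apply the exponential map at the origin of the stereographic sphere and Poincaré ball models. All class and method names, and the choice of unit curvatures, are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RiemannianMoEProjector(nn.Module):
    """Hypothetical sketch of a geometry-routing layer: each node embedding is
    softly assigned to Euclidean, spherical, or hyperbolic projections via a
    learned gate. Design choices here are assumptions, not the paper's code."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, 3)  # one routing logit per geometry expert
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(3))

    @staticmethod
    def _exp_sphere(v, eps=1e-6):
        # Exponential map at the origin of the stereographic sphere model
        # (curvature +1): tan(||v||) * v / ||v||, with the norm clamped
        # away from pi/2 for numerical stability.
        n = v.norm(dim=-1, keepdim=True).clamp(min=eps)
        return torch.tan(n.clamp(max=1.0)) * v / n

    @staticmethod
    def _exp_hyperbolic(v, eps=1e-6):
        # Exponential map at the origin of the Poincaré ball
        # (curvature -1): tanh(||v||) * v / ||v||.
        n = v.norm(dim=-1, keepdim=True).clamp(min=eps)
        return torch.tanh(n) * v / n

    def forward(self, x):
        w = F.softmax(self.gate(x), dim=-1)             # (N, 3) routing weights
        flat = self.experts[0](x)                       # Euclidean expert
        sph = self._exp_sphere(self.experts[1](x))      # spherical projection
        hyp = self._exp_hyperbolic(self.experts[2](x))  # hyperbolic projection
        stacked = torch.stack([flat, sph, hyp], dim=-2)  # (N, 3, dim)
        return (w.unsqueeze(-1) * stacked).sum(dim=-2)   # geometry-aware embedding

# Usage: project node features before feeding them to the graph transformer.
proj = RiemannianMoEProjector(dim=64)
z = proj(torch.randn(100, 64))  # (100, 64) mixed-geometry node embeddings
```

In this reading, the per-node routing weights are what supply the intrinsic geometric explanation: inspecting them reveals which geometry the model judges to fit each node's local structure.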
Similar Papers
ManifoldFormer: Geometric Deep Learning for Neural Dynamics on Riemannian Manifolds
Machine Learning (CS)
Helps brain signals show patterns better.
The Origins of Representation Manifolds in Large Language Models
Machine Learning (CS)
Helps AI understand ideas by seeing how they connect.
Latent Manifold Reconstruction and Representation with Topological and Geometrical Regularization
Machine Learning (CS)
Finds hidden patterns in messy data.