ManifoldFormer: Geometric Deep Learning for Neural Dynamics on Riemannian Manifolds
By: Yihang Fu, Lifang He, Qingyu Chen
Potential Business Impact:
Helps computers find clearer patterns in brain signals.
Existing EEG foundation models mainly treat neural signals as generic time series in Euclidean space, ignoring the intrinsic geometric structure of neural dynamics that constrains brain activity to low-dimensional manifolds. This fundamental mismatch between model assumptions and neural geometry limits representation quality and cross-subject generalization. ManifoldFormer addresses this limitation through a novel geometric deep learning framework that explicitly learns neural manifold representations. The architecture integrates three key innovations: a Riemannian VAE for manifold embedding that preserves geometric structure, a geometric Transformer with geodesic-aware attention mechanisms operating directly on neural manifolds, and a dynamics predictor leveraging neural ODEs for manifold-constrained temporal evolution. Extensive evaluation across four public datasets demonstrates substantial improvements over state-of-the-art methods, with 4.6-4.8% higher accuracy and 6.2-10.2% higher Cohen's Kappa, while maintaining robust cross-subject generalization. The geometric approach reveals meaningful neural patterns consistent with neurophysiological principles, establishing geometric constraints as essential for effective EEG foundation models.
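The paper abstract does not specify the exact form of the geodesic-aware attention, but the idea can be illustrated with a short sketch. The minimal PyTorch example below biases standard scaled dot-product attention logits with geodesic (arc-length) distances between token embeddings projected onto a unit hypersphere, so that geodesically nearby time steps attend to each other more strongly. The class name GeodesicAttention, the penalty weight lambda_geo, and the choice of the sphere as the manifold are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeodesicAttention(nn.Module):
    """Single-head attention whose logits are penalized by geodesic
    distance between tokens projected onto a unit hypersphere.

    The sphere is a simple stand-in Riemannian manifold; the paper's
    actual manifold, metric, and attention form may differ.
    """

    def __init__(self, dim: int, lambda_geo: float = 1.0):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.lambda_geo = lambda_geo  # weight of the geodesic penalty (assumed)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim) -- latent embeddings, one per time step
        q, k, v = self.q(x), self.k(x), self.v(x)

        # Project the raw embeddings onto the unit hypersphere S^{d-1}.
        u = F.normalize(x, dim=-1)
        # On the sphere, geodesic distance is arc length: arccos(<u_i, u_j>).
        cos = (u @ u.transpose(-2, -1)).clamp(-1 + 1e-6, 1 - 1e-6)
        geo = torch.arccos(cos)  # (batch, seq_len, seq_len)

        # Scaled dot-product logits, biased toward geodesically near tokens.
        logits = (q @ k.transpose(-2, -1)) * self.scale - self.lambda_geo * geo
        attn = logits.softmax(dim=-1)
        return attn @ v


if __name__ == "__main__":
    layer = GeodesicAttention(dim=64)
    eeg_latents = torch.randn(2, 128, 64)  # toy batch: 2 trials, 128 steps
    out = layer(eeg_latents)
    print(out.shape)  # torch.Size([2, 128, 64])
```

In a full pipeline matching the abstract's description, such a layer would sit between a Riemannian VAE encoder (producing the manifold embeddings) and a neural-ODE dynamics predictor; both of those components are omitted here for brevity.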
Similar Papers
Geometry-Aware Deep Congruence Networks for Manifold Learning in Cross-Subject Motor Imagery
Machine Learning (CS)
Lets computers understand brain signals from anyone.
The Neural Differential Manifold: An Architecture with Explicit Geometric Structure
Machine Learning (CS)
Builds geometric structure directly into how computers learn.
Learning Geometry: A Framework for Building Adaptive Manifold Models through Metric Optimization
Machine Learning (CS)
Teaches computers to learn by changing their shape.