Score: 1

ManifoldFormer: Geometric Deep Learning for Neural Dynamics on Riemannian Manifolds

Published: November 20, 2025 | arXiv ID: 2511.16828v1

By: Yihang Fu, Lifang He, Qingyu Chen

Potential Business Impact:

Improves how models extract meaningful patterns from brain (EEG) signals, aiding analysis across different subjects.

Business Areas:
Image Recognition, Data and Analytics, Software

Existing EEG foundation models mainly treat neural signals as generic time series in Euclidean space, ignoring the intrinsic geometric structure of neural dynamics that constrains brain activity to low-dimensional manifolds. This fundamental mismatch between model assumptions and neural geometry limits representation quality and cross-subject generalization. ManifoldFormer addresses this limitation through a novel geometric deep learning framework that explicitly learns neural manifold representations. The architecture integrates three key innovations: a Riemannian VAE for manifold embedding that preserves geometric structure, a geometric Transformer with geodesic-aware attention mechanisms operating directly on neural manifolds, and a dynamics predictor leveraging neural ODEs for manifold-constrained temporal evolution. Extensive evaluation across four public datasets demonstrates substantial improvements over state-of-the-art methods, with 4.6-4.8% higher accuracy and 6.2-10.2% higher Cohen's Kappa, while maintaining robust cross-subject generalization. The geometric approach reveals meaningful neural patterns consistent with neurophysiological principles, establishing geometric constraints as essential for effective EEG foundation models.
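The abstract's "geodesic-aware attention" replaces Euclidean similarity with distances measured along the manifold itself. As a loose illustration only (not the authors' implementation), the sketch below uses the unit hypersphere as a stand-in manifold: attention weights decay with great-circle (arccos) distance, and the aggregated output is retracted back onto the sphere. The function name, the temperature parameter, and the spherical manifold choice are all assumptions for this example.

```python
import numpy as np

def geodesic_attention(X, tau=1.0):
    """Toy geodesic-aware attention on the unit hypersphere.

    X: (n, d) array of points; rows are projected onto the sphere,
    and attention weights decay with geodesic (great-circle) distance
    rather than being computed from Euclidean dot products.
    """
    # Project rows onto the unit sphere (the manifold constraint).
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Geodesic distance on the sphere: arccos of the inner product.
    cos = np.clip(X @ X.T, -1.0, 1.0)
    dist = np.arccos(cos)                       # (n, n) pairwise geodesic distances
    # Softmax over negative distances: nearby points attend more strongly.
    scores = -dist / tau
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    W = np.exp(scores)
    W /= W.sum(axis=1, keepdims=True)
    out = W @ X
    # Retract the aggregated output back onto the manifold.
    return out / np.linalg.norm(out, axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))   # 5 tokens, 8-dimensional embeddings
Y = geodesic_attention(X)
print(np.allclose(np.linalg.norm(Y, axis=1), 1.0))  # True: outputs stay on the sphere
```

The key design point mirrored from the abstract is that both the similarity measure and the output live on the manifold, so downstream layers never see points that violate the geometric constraint.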

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)