Latent Graph Learning in Generative Models of Neural Signals

Published: August 22, 2025 | arXiv ID: 2508.16776v1

By: Nathan X. Kodama, Kenneth A. Loparo

Potential Business Impact:

Helps infer brain connectivity from recorded neural signals.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Inferring temporal interaction graphs and higher-order structure from neural signals is a key problem in building generative models for systems neuroscience. Foundation models for large-scale neural data represent shared latent structures of neural signals. However, extracting interpretable latent graph representations from foundation models remains an open challenge. Here we explore latent graph learning in generative models of neural signals. By testing against numerical simulations of neural circuits with known ground-truth connectivity, we evaluate several hypotheses for explaining learned model weights. We find modest alignment between the extracted network representations and the underlying directed graphs, and strong alignment with the co-input graph representations. These findings motivate paths towards incorporating graph-based geometric constraints into the construction of large-scale foundation models for neural data.
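The evaluation described in the abstract can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' code: the simulated adjacency matrix, the noisy stand-in for "learned" model weights, and the correlation-based alignment score are all assumptions introduced here. It shows the distinction between a directed connectivity graph and its derived co-input graph (two neurons are co-input-linked when they receive a common presynaptic input).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # number of simulated neurons

# Hypothetical ground-truth directed connectivity:
# A[i, j] = 1 means neuron j projects to neuron i.
A = (rng.random((n, n)) < 0.25).astype(float)
np.fill_diagonal(A, 0.0)

# Co-input graph: neurons i and k are linked when they share at least
# one common input j; (A @ A.T)[i, k] counts the shared inputs.
C = (A @ A.T > 0).astype(float)
np.fill_diagonal(C, 0.0)

# Stand-in "learned" weights: ground truth plus noise. In the paper these
# would instead be weights extracted from a trained generative model.
W = A + 0.5 * rng.standard_normal((n, n))

def offdiag(M):
    """Flatten the off-diagonal entries of a square matrix."""
    mask = ~np.eye(M.shape[0], dtype=bool)
    return M[mask]

# Alignment score (an assumption here): Pearson correlation between the
# learned weights and each graph, over off-diagonal entries.
directed_align = np.corrcoef(offdiag(W), offdiag(A))[0, 1]
coinput_align = np.corrcoef(offdiag(np.abs(W @ W.T)), offdiag(C))[0, 1]
print(f"directed alignment: {directed_align:.2f}")
print(f"co-input alignment: {coinput_align:.2f}")
```

With real model weights, comparing these two scores is one way to test whether a model has captured the directed wiring itself or only the shared-input structure it induces.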

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)