Latent Graph Learning in Generative Models of Neural Signals
By: Nathan X. Kodama, Kenneth A. Loparo
Potential Business Impact:
Helps infer brain connectivity from recorded neural signals.
Inferring temporal interaction graphs and higher-order structure from neural signals is a key problem in building generative models for systems neuroscience. Foundation models for large-scale neural data represent shared latent structures of neural signals. However, extracting interpretable latent graph representations from foundation models remains an open challenge. Here we explore latent graph learning in generative models of neural signals. By testing against numerical simulations of neural circuits with known ground-truth connectivity, we evaluate several hypotheses for explaining learned model weights. We find modest alignment between the extracted network representations and the underlying directed graphs, and strong alignment with the co-input graph representations. These findings motivate paths towards incorporating graph-based geometric constraints in the construction of large-scale foundation models for neural data.
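The co-input graph mentioned in the abstract can be illustrated with a short sketch. This is not the authors' code; it assumes the common definition in which two nodes are co-input-linked when they share a presynaptic source, and `co_input_graph` is a hypothetical helper name.

```python
import numpy as np

def co_input_graph(adj: np.ndarray) -> np.ndarray:
    """Undirected co-input graph of a directed adjacency matrix.

    adj[i, j] = 1 means a directed edge i -> j. The returned matrix
    co[j, k] counts the presynaptic sources shared by nodes j and k
    (assumed definition; the paper does not give an explicit formula).
    """
    co = adj.T @ adj          # co[j, k] = sum_i adj[i, j] * adj[i, k]
    np.fill_diagonal(co, 0)   # ignore each node's pairing with itself
    return co

# Toy circuit: neuron 0 drives neurons 1 and 2; neuron 3 also drives 2.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 0],
])
C = co_input_graph(A)
print(C[1, 2])  # neurons 1 and 2 share one common input (neuron 0)
```

Because the co-input matrix is symmetric even when the underlying circuit is not, it can align well with learned representations that discard edge direction, which is one way to read the abstract's finding of strong co-input alignment alongside only modest directed-graph alignment.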
Similar Papers
Unified Generative Latent Representation for Functional Brain Graphs
Neurons and Cognition
Lets computers understand brain activity patterns.
Self-Supervised Discovery of Neural Circuits in Spatially Patterned Neural Responses with Graph Neural Networks
Neurons and Cognition
Helps map brain connections from brain signals.
Graph signal aware decomposition of dynamic networks via latent graphs
Signal Processing
Finds hidden patterns in changing networks.