Latent Graph Learning in Generative Models of Neural Signals

Published: 23 Sept 2025 · Last Modified: 18 Oct 2025 · NeurIPS 2025 Workshop BrainBodyFM · CC BY 4.0
Keywords: biophysical interpretability, latent graph learning, neuroformer, generative models, neural connectivity, graph representations
TL;DR: Generative models of neural signals show strong alignment in higher-order co-input graph representations, suggesting that generative models implicitly learn latent graph representations even when edge-level representations are noisy.
Abstract: Inferring temporal interaction graphs and higher-order structure from neural signals is a key problem in building generative models for systems neuroscience. Foundation models trained on large-scale neural data capture shared latent structure across neural signals, yet extracting interpretable latent graph representations from them remains an open problem. Here we explore latent graph learning in generative models of neural signals. By testing against numerical simulations of neural circuits with known ground-truth connectivity, we evaluate several hypotheses for explaining learned model weights. We find modest alignment between extracted network representations and the underlying directed graphs, and strong alignment in the co-input graph representations. These findings motivate paths towards incorporating graph-based geometric constraints in the construction of large-scale foundation models for neural data.
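The abstract contrasts edge-level (directed-graph) alignment with co-input graph alignment. As a minimal sketch of what such a comparison could look like, assuming the common definition of a co-input graph (two neurons are linked when they receive input from at least one shared source) and a simple off-diagonal correlation as the alignment score — the paper's actual metrics and construction may differ:

```python
import numpy as np

def co_input_graph(adj):
    """Undirected co-input graph: nodes i and j are linked when at least
    one common source node projects to both of them."""
    common = adj.T @ adj            # common[i, j] = number of shared input sources
    co = (common > 0).astype(float)
    np.fill_diagonal(co, 0.0)       # ignore self-links
    return co

def graph_alignment(est, true):
    """Pearson correlation between off-diagonal entries of two adjacency
    matrices, used here as a simple alignment score (an assumption)."""
    mask = ~np.eye(true.shape[0], dtype=bool)
    a, b = est[mask], true[mask]
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

if __name__ == "__main__":
    # Toy example: random ground-truth directed adjacency for 5 neurons.
    rng = np.random.default_rng(0)
    A_true = (rng.random((5, 5)) < 0.3).astype(float)
    np.fill_diagonal(A_true, 0.0)

    # A noisy "extracted" adjacency, standing in for learned model weights.
    A_est = A_true + 0.5 * rng.standard_normal((5, 5))
    A_est_bin = (A_est > 0.5).astype(float)

    print("edge-level alignment:", graph_alignment(A_est, A_true))
    print("co-input alignment:  ",
          graph_alignment(co_input_graph(A_est_bin), co_input_graph(A_true)))
```

The paper's finding corresponds to the second score being systematically higher than the first: even when individual edges are noisy, shared-input structure can still be well recovered.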
Submission Number: 14