Multi-Graph Meta-Transformer: An Interpretable Framework for Cross-Graph Functional Alignment in Neural Decoding
Track: Track 1: Original Research/Position/Education/Attention Track
Keywords: Multi-Graph Learning, Cross-Graph Functional Alignment, Nonspatial Sequence Memory Tasks, Electrophysiological Experiments
Abstract: Neuroscience experiments often involve capturing brain signals from heterogeneous individuals, each with unique neural dynamics, even in response to identical stimuli. This subject-to-subject variability makes it exceptionally challenging to aggregate data and extract common neural patterns across individuals. For graph-based models in particular, this challenge is amplified, as differences in brain connectivity and structure make it difficult to define a consistent and interpretable graph representation. To address this issue, we propose the Multi-Graph Meta-Transformer (MGMT), a unified framework that operates on a set of graphs sharing a single prediction target, while respecting graph-specific structures. MGMT captures instance-level patterns, aligns their structural representations in a shared latent space, and integrates them to learn structure that is robust and generalizable across graphs. We apply this framework to uncover neural mechanisms underlying memory by analyzing hippocampal local field potentials (LFPs) recorded from five rats performing an odor–sequence task across multiple trials (instances). Each rat is modeled as a distinct graph with its own node set and topology. For each trial, MGMT first applies task-supervised, depth-aware Graph Transformer encoders to each graph and extracts “supernodes” via learned attention. It then builds a meta-graph by retaining intra-graph edges and adding inter-graph “superedges” only between supernodes with high similarity in the learned embedding space. As a result, message passing is restricted to functionally aligned pairs. That is, information propagates along nodes with strong connections and is largely blocked between dissimilar nodes, reducing cross-graph noise and preventing “bad mixing.” Conceptually, MGMT reframes graph fusion as functional alignment, borrowing statistical power by linking regions that exhibit similar patterns across graphs. In doing so, MGMT yields more accurate and interpretable graph-level predictions. In our memory experiment, MGMT outperforms models based on single subjects (i.e., single graphs) as well as several existing graph fusion strategies. Additionally, it uncovers distal CA1 selectivity for nonspatial information processing and demonstrates that similarity-based superedges capture interpretable brain dynamics. We also highlight how MGMT can be readily used with multimodal data (different measurement channels) and multiview settings (different graph constructions from the same measurements), illustrating its flexibility across various experimental designs.
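To make the two meta-graph steps described in the abstract concrete, below is a minimal sketch, not the authors' implementation: (1) attention-based pooling of per-graph node embeddings into supernodes, and (2) similarity-gated inter-graph superedges. The function names, the choice of a single shared attention projection, the cosine-similarity metric, and the threshold `tau` are all illustrative assumptions; the paper's task-supervised, depth-aware encoders are taken as given and replaced here by random embeddings.

```python
# Illustrative sketch of MGMT-style supernode pooling and superedge construction.
# Assumptions (not from the paper): cosine similarity, hard threshold tau,
# one shared attention projection (the paper uses per-graph encoders).
import torch
import torch.nn.functional as F


def pool_supernodes(node_emb: torch.Tensor, attn_proj: torch.nn.Linear) -> torch.Tensor:
    """Pool node embeddings [n_i, d] into K supernodes [K, d] via learned attention."""
    scores = attn_proj(node_emb)        # [n_i, K]: affinity of each node to each supernode
    weights = scores.softmax(dim=0)     # normalize over nodes, per supernode
    return weights.T @ node_emb         # [K, d]: attention-weighted supernode embeddings


def build_superedges(supernodes: list[torch.Tensor], tau: float = 0.8):
    """Add an inter-graph superedge (g, k) <-> (h, l) only when the cosine
    similarity of the two supernode embeddings exceeds tau, so downstream
    message passing stays restricted to functionally aligned pairs."""
    edges = []
    for g in range(len(supernodes)):
        for h in range(g + 1, len(supernodes)):
            sim = F.cosine_similarity(
                supernodes[g].unsqueeze(1),   # [K, 1, d]
                supernodes[h].unsqueeze(0),   # [1, K, d]
                dim=-1,
            )                                 # [K, K] pairwise similarities
            for k, l in (sim > tau).nonzero(as_tuple=False).tolist():
                edges.append(((g, k), (h, l)))
    return edges


if __name__ == "__main__":
    d, K = 16, 4
    attn_proj = torch.nn.Linear(d, K, bias=False)
    # Two toy graphs with different node counts (e.g., electrodes per rat).
    embs = [torch.randn(10, d), torch.randn(7, d)]
    supernodes = [pool_supernodes(e, attn_proj) for e in embs]
    print(build_superedges(supernodes, tau=0.3))
```

Under this reading, pairs below `tau` simply receive no superedge, so cross-graph information flow in the meta-graph is confined to functionally aligned supernodes, which is the "bad mixing" guard the abstract describes.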
Submission Number: 425