Not All Sessions Are Equal: Data Selection for Multi-Session Pretraining in Neural Data Transformers
Keywords: neural foundation model, pretraining, scaling law, self-supervised learning, neural encoding, neural dynamics, brain-computer interfaces
TL;DR: We show that the benefits of pretraining multi-session neural data transformers are highly sensitive to data and session selection.
Abstract: A key challenge in analyzing neuroscience datasets is the profound variability they exhibit across sessions, animals, and data modalities. Several recent studies have demonstrated performance gains from pretraining neural foundation models on multi-session datasets, seemingly overcoming this challenge. However, these studies typically lack fine-grained data scaling analyses, and it remains unclear whether all sessions contribute equally to downstream performance gains. In this work, we systematically investigate how cross-session variability affects the scaling behavior of neural data transformers (NDTs) in neural activity prediction. We propose a session selection procedure based on single-session finetuning performance. Using this procedure, models pretrained on as few as five selected sessions outperformed those pretrained on the entire dataset of 84 sessions. Our findings challenge the direct applicability of traditional scaling laws to neural data and suggest that multi-session scaling benefits may need to be re-examined in light of session-to-session variability. This work both highlights the importance of incremental data scaling analyses and suggests new avenues toward optimally selecting pretraining data when developing foundation models on large-scale neuroscience datasets.
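The abstract describes a session selection procedure that ranks sessions by single-session finetuning performance and keeps only the top-scoring ones for multi-session pretraining. The following is a minimal sketch of that idea, assuming per-session scores are already available; the function name, the metric, and the choice of k are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the session-selection idea from the abstract.
# Assumes finetune_scores maps session IDs to held-out performance of a
# model finetuned on that session alone (metric and names are hypothetical).

def select_sessions(finetune_scores: dict, k: int = 5) -> list:
    """Rank sessions by single-session finetuning score and keep the top k."""
    ranked = sorted(finetune_scores, key=finetune_scores.get, reverse=True)
    return ranked[:k]


# Hypothetical usage: pretrain a multi-session NDT only on the selected sessions.
scores = {"session_01": 0.31, "session_02": 0.18, "session_03": 0.27}
print(select_sessions(scores, k=2))  # e.g., ['session_01', 'session_03']
```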
Submission Number: 32