Learning Time-Scale Invariant Population-Level Neural Representations

Published: 23 Sept 2025, Last Modified: 24 Nov 2025, NeurIPS 2025 Workshop BrainBodyFM, CC BY 4.0
Keywords: neural time series, representation learning, time-scale invariance, self-supervised learning
TL;DR: We propose Time-scale Augmented Pretraining (TSAP), a strategy that improves robustness of population-level neural representations to preprocessing mismatches in time-scale, enabling greater generalization to variable input lengths.
Abstract: General-purpose foundation models for neural time series can help accelerate neuroscientific discoveries and enable applications such as brain-computer interfaces (BCIs). A key component in scaling these models is population-level representation learning, which leverages information across channels to capture spatial as well as temporal structure. Recent population-level approaches have shown that such representations can be learned efficiently on top of pretrained temporal encoders and are useful for decoding a variety of downstream tasks. However, these models remain sensitive to mismatches in preprocessing, particularly in time-scale, between pretraining and downstream settings. We systematically examine how time-scale mismatches affect generalization and find that existing representations lack invariance. To address this, we introduce Time-scale Augmented Pretraining (TSAP), which consistently improves robustness to different time-scales across decoding tasks and builds invariance in the representation space. These results highlight handling preprocessing diversity as a key step toward building generalizable neural foundation models.
Submission Number: 61
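
The abstract does not detail how TSAP's augmentation is implemented. As a rough illustration of the general idea of time-scale augmentation, a pretraining pipeline might resample each multichannel window to a randomly drawn length before encoding. The function name `random_time_scale`, the `scale_range` bounds, and the linear-interpolation resampling below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def random_time_scale(x: np.ndarray, scale_range=(0.5, 2.0), rng=None) -> np.ndarray:
    """Hypothetical time-scale augmentation (not the paper's TSAP code):
    resample a (channels, time) window to a randomly drawn length via
    per-channel linear interpolation.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_channels, n_time = x.shape
    scale = rng.uniform(*scale_range)              # random time-scale factor
    new_len = max(2, int(round(n_time * scale)))   # length of the rescaled window
    old_t = np.linspace(0.0, 1.0, n_time)          # original sample positions
    new_t = np.linspace(0.0, 1.0, new_len)         # rescaled sample positions
    # Interpolate each channel independently onto the new time grid.
    return np.stack([np.interp(new_t, old_t, x[c]) for c in range(n_channels)])

# Example: augment a 32-channel, 1000-sample window during pretraining.
window = np.random.randn(32, 1000)
augmented = random_time_scale(window)
print(augmented.shape)  # (32, L) with L between roughly 500 and 2000
```

Training the population-level encoder on such variably rescaled views is one plausible way to encourage the invariance to input time-scale that the paper reports; the actual augmentation family and schedule used by TSAP may differ.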