Keywords: neuroscience, self-supervised learning, representation learning
TL;DR: We propose an SSL framework (Population Transformer, PopT) to learn population-level representations of intracranial activity across subjects with varied electrode layouts.
Abstract: We present a self-supervised framework that learns population-level codes for arbitrary ensembles of neural recordings. We address key challenges in scaling models with neural time-series data, namely, sparse and variable electrode distribution across subjects and datasets. The Population Transformer (PopT) stacks on top of pretrained representations and enhances downstream decoding by enabling learned aggregation of multiple spatially-sparse data channels. The pretrained PopT lowers the amount of data required for downstream decoding experiments, while increasing accuracy, even on held-out subjects and tasks. Beyond decoding, we interpret the pretrained PopT and fine-tuned models to show how they can be used to extract neuroscience insights from massive amounts of data. We release our code as well as a pretrained PopT to enable off-the-shelf improvements in multi-channel intracranial data decoding and interpretability.
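To make the aggregation idea concrete, the sketch below illustrates one plausible way a transformer could pool pretrained per-channel embeddings, together with electrode coordinates, into a single ensemble-level code via a [CLS]-style token, so the number and placement of channels can vary across subjects. This is an illustrative assumption-laden sketch, not the released PopT implementation; the class name `ChannelAggregator`, the 3-D position projection, and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class ChannelAggregator(nn.Module):
    """Toy transformer aggregator over per-channel embeddings (hypothetical sketch).

    Each electrode channel arrives as a pretrained embedding plus a 3-D position;
    a [CLS]-style token summarizes the ensemble, so the channel layout can vary.
    """

    def __init__(self, emb_dim=256, n_heads=4, n_layers=2):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, emb_dim))    # learned summary token
        self.pos_proj = nn.Linear(3, emb_dim)                  # electrode (x, y, z) -> emb_dim
        layer = nn.TransformerEncoderLayer(emb_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, channel_emb, channel_pos):
        # channel_emb: (batch, n_channels, emb_dim) pretrained per-channel codes
        # channel_pos: (batch, n_channels, 3) electrode coordinates
        tokens = channel_emb + self.pos_proj(channel_pos)
        cls = self.cls.expand(tokens.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, tokens], dim=1))
        return out[:, 0]                                       # ensemble-level representation

# Example: 12 electrodes from one subject, 256-dim pretrained embeddings
emb = torch.randn(1, 12, 256)
pos = torch.randn(1, 12, 3)
print(ChannelAggregator()(emb, pos).shape)                     # torch.Size([1, 256])
```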
Submission Number: 76