Capturing cross-session neural population variability through self-supervised identification of consistent neuron ensembles

Published: 07 Nov 2022, Last Modified: 05 May 2023 · NeurReps 2022 Poster
Keywords: Manifold learning, Neuroscience, Self-supervised learning, Neural decoding, Neural population activity, Sequential autoencoders, Electrophysiology
TL;DR: We propose CAPTIVATE, a self-supervised approach for stable decoding of behaviour from distant unseen neural recording sessions.
Abstract: Decoding stimuli or behaviour from recorded neural activity is a common approach to interrogating brain function in research, and an essential part of brain-computer and brain-machine interfaces. Reliable decoding even from small neural populations is possible because high-dimensional neural population activity typically occupies low-dimensional manifolds that are discoverable with suitable latent variable models. Over time, however, drift in the activity of individual neurons and instabilities in neural recording devices can be substantial, making stable decoding over days and weeks impractical. While this drift cannot be predicted at the level of individual neurons, population-level variations across consecutive recording sessions, such as differing sets of recorded neurons and varying permutations of the consistent ones, may be learnable when the underlying manifold is stable over time. Classifying neurons as consistent versus unfamiliar across sessions, and correcting for changes in the ordering of the consistent neurons, may then maintain decoding performance and uncover a task-related neural manifold. Here we show that self-supervised training of a deep neural network can be used to compensate for this inter-session variability. As a result, a sequential autoencoding model can maintain state-of-the-art behaviour decoding performance for completely unseen recording sessions several days into the future. Our approach only requires a single recording session for training the model, and is a step towards reliable, recalibration-free brain-computer interfaces.
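To make the two correction steps described in the abstract concrete, the sketch below illustrates how consistent neurons in a new session might be identified and re-ordered against a reference session. This is a minimal stand-in, not the authors' method: it assumes spike-count arrays of shape (trials, neurons, time) and matches neurons by cosine similarity of trial-averaged activity profiles with a Hungarian assignment, whereas the paper learns the correspondence with a self-supervised deep network. Function names and the similarity threshold are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): match neurons in a new
# session to a reference session, flag unmatched neurons as unfamiliar,
# and recover the permutation needed to keep a downstream decoder stable.
import numpy as np
from scipy.optimize import linear_sum_assignment


def neuron_signatures(spike_counts: np.ndarray) -> np.ndarray:
    """Summarise each neuron by its trial-averaged, unit-norm activity profile.

    spike_counts: array of shape (trials, neurons, time).
    Returns an array of shape (neurons, time).
    """
    rates = spike_counts.mean(axis=0)                       # (neurons, time)
    rates = rates - rates.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(rates, axis=1, keepdims=True) + 1e-8
    return rates / norms


def match_sessions(ref_counts, new_counts, min_similarity=0.5):
    """Return (ref_idx, new_idx) pairs of putatively consistent neurons and
    the indices of unfamiliar neurons in the new session."""
    ref_sig = neuron_signatures(ref_counts)
    new_sig = neuron_signatures(new_counts)
    similarity = ref_sig @ new_sig.T                        # cosine similarity
    # One-to-one candidate matching via the Hungarian algorithm.
    ref_idx, new_idx = linear_sum_assignment(-similarity)
    keep = similarity[ref_idx, new_idx] >= min_similarity
    consistent = list(zip(ref_idx[keep], new_idx[keep]))
    matched_new = {int(j) for _, j in consistent}
    unfamiliar = sorted(set(range(new_counts.shape[1])) - matched_new)
    return consistent, unfamiliar


if __name__ == "__main__":
    # Synthetic example: the "new" session is a permuted, noisier copy of
    # the reference session, mimicking inter-session variability.
    rng = np.random.default_rng(0)
    ref = rng.poisson(3.0, size=(40, 30, 100))              # trials x neurons x time
    perm = rng.permutation(30)
    new = ref[:, perm, :] + rng.poisson(1.0, size=ref.shape)
    consistent, unfamiliar = match_sessions(ref, new)
    print(f"matched {len(consistent)} neurons, {len(unfamiliar)} unfamiliar")
```

In the full approach, the recovered correspondence would be used to re-order (and, for unfamiliar neurons, mask or drop) the inputs of the pretrained sequential autoencoder so that decoding from the unseen session requires no recalibration.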
