Keywords: multi-region neural recordings, shared/private disentanglement, transformer sequence models, coupled autoencoders, latent variable dynamics, Neuropixels, neural dynamics, representation learning
TL;DR: We introduce a coupled transformer autoencoder that separates shared from private neural dynamics in simultaneous multi-area neuronal recordings.
Abstract: Simultaneous recordings from thousands of neurons across multiple brain areas reveal rich mixtures of activity that is shared between regions and dynamics that are unique to each region. Existing alignment or multi-view methods neglect temporal structure, whereas dynamical latent-variable models capture temporal dependencies but are usually restricted to a single area, assume linear read-outs, or conflate shared and private signals. We introduce the Coupled Transformer Autoencoder (CTAE)—a sequence model that addresses both (i) non-stationary, non-linear dynamics and (ii) separation of shared versus region-specific structure, in a single framework. CTAE employs transformer encoders and decoders to capture long-range neural dynamics, and explicitly partitions each region’s latent space into orthogonal shared and private subspaces. We demonstrate the effectiveness of CTAE on a controlled synthetic dataset and on two high-density electrophysiology datasets of simultaneous recordings from multiple regions, one from motor cortical areas and the other from sensory areas. CTAE extracts meaningful representations that support better decoding of behavioral variables than existing approaches.
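The abstract's key mechanism—partitioning each region's latent space into orthogonal shared and private subspaces—can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the dimensions, the equal shared/private split, and the cross-covariance penalty are all illustrative assumptions standing in for whatever the full transformer model actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): T time steps, D latent dims.
T, D = 100, 16
d_shared = 8  # assumed equal split between shared and private blocks

# Stand-in for the latents a region's transformer encoder would emit (T x D).
Z = rng.standard_normal((T, D))

# Partition the region's latent space into shared and private subspaces.
Z_shared, Z_private = Z[:, :d_shared], Z[:, d_shared:]

def orthogonality_penalty(A, B):
    """Squared Frobenius norm of the cross-covariance between two latent
    blocks. Driving this toward zero encourages the shared and private
    subspaces to be orthogonal, as the abstract describes."""
    return np.linalg.norm(A.T @ B, ord="fro") ** 2

# Added to the reconstruction loss during training (illustrative only).
penalty = orthogonality_penalty(Z_shared, Z_private)
```

In a sketch like this, the penalty would be summed over regions and weighted against the autoencoder's reconstruction loss; exactly orthogonal blocks yield a penalty of zero.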
Primary Area: applications to neuroscience & cognitive science
Submission Number: 8712