Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks

TMLR Paper 388 Authors

25 Aug 2022 (modified: 28 Feb 2023) · Rejected by TMLR
Abstract: Recent advances in neural imaging technologies enable high-quality recordings from hundreds of neurons over multiple days, with the potential to uncover how activity in neural circuits reshapes over learning. However, the complexity and dimensionality of population responses pose significant challenges for analysis. To cope with this problem, existing methods for studying neuronal adaptation and learning often impose strong assumptions on the data or model, which may yield biased descriptions of the activity changes. In this work, we avoid such biases by developing a data-driven analysis method that reveals activity changes due to task learning. We use a variant of cycle-consistent adversarial networks to learn the unknown mapping from pre- to post-learning neuronal responses. To this end, we develop an end-to-end pipeline to preprocess, train, validate and interpret the unsupervised learning framework on calcium imaging data. We validate our method on two synthetic datasets with known ground-truth transformations, as well as on V1 recordings from behaving mice transitioning from novice to expert-level performance in a visually guided behavioral experiment. We show that our models can identify neurons and spatiotemporal activity patterns relevant to learning the behavioral task, in terms of subpopulations maximizing behavioral decoding performance and task characteristics not explicitly used for training the models. Together, our results demonstrate that analyzing neuronal learning processes with data-driven deep unsupervised methods can unravel activity changes in complex datasets.
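The core idea named in the abstract — learning a mapping between pre- and post-learning responses with a cycle-consistency constraint — can be illustrated with a minimal sketch. This is not the paper's implementation: the dimensions, the linear generators `G` and `F`, and the L1 cycle loss are illustrative assumptions (CycleGAN-style models use neural-network generators plus adversarial losses; only the cycle term is shown here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not taken from the paper): neurons and trials.
n_neurons, n_trials = 10, 100

# Toy stand-ins for pre- and post-learning population responses.
x_pre = rng.standard_normal((n_trials, n_neurons))
x_post = rng.standard_normal((n_trials, n_neurons))

# Linear stand-ins for the two generators G: pre -> post and F: post -> pre.
G = rng.standard_normal((n_neurons, n_neurons)) * 0.1
F = rng.standard_normal((n_neurons, n_neurons)) * 0.1

def cycle_consistency_loss(x_pre, x_post, G, F):
    """Mean L1 error after a full round trip, as in CycleGAN's cycle loss:
    pre -> post -> pre should recover x_pre, and vice versa."""
    forward_cycle = np.abs(x_pre @ G @ F - x_pre).mean()
    backward_cycle = np.abs(x_post @ F @ G - x_post).mean()
    return forward_cycle + backward_cycle

loss = cycle_consistency_loss(x_pre, x_post, G, F)
print(loss)
```

In a full model this term would be minimized jointly with adversarial losses that push `G(x_pre)` toward the post-learning response distribution and `F(x_post)` toward the pre-learning one.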
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: N/A
Assigned Action Editor: ~Bertrand_Thirion1
Submission Number: 388