Keywords: VAE, generative models, biological vision, neuroscience
TL;DR: We introduce a data-efficient way to adapt VAE priors, allowing us to explain both passive viewing and task-driven neuronal activity in mouse visual cortex.
Abstract: The brain interprets visual information through learned regularities, formalized as performing probabilistic inference under a prior. The visual cortex supplies priors for this inference, some of which arise from higher-level representations as contextual priors and rely on widely documented top-down connections. While evidence supports the acquisition of priors for natural images, it remains unclear whether separate priors can be flexibly acquired for more specific computations, e.g., when learning a task. To investigate this in neural recordings, we built a generative model trained jointly on natural images and on a simple task, and analyzed it alongside large-scale recordings from the early visual cortex of mice. To this end, we extended the standard VAE formalism to acquire a task flexibly and data-efficiently by reusing representations learned in a task-agnostic manner. We used the resulting Task-Amortized VAE to investigate biases arising when stimuli violated the trained task statistics. Such mismatches between the learned task statistics and the incoming sensory evidence resulted in multimodal response profiles, which were also observed in calcium imaging data from mice performing an analogous task. The task-optimized generative model could account for various characteristics of V1 population activity, including within-day updates to the population responses. Our results confirm that flexible, task-specific contextual priors can be learned on demand by the visual cortex and deployed as early as its entry stage, V1.
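As a rough illustration of the idea described in the abstract, the sketch below separates a task-agnostic VAE backbone (trained on natural images) from a lightweight task-conditioned prior that is the only component updated when a task is acquired. This is a minimal sketch of the concept, not the authors' implementation: the names (`VAE`, `TaskPrior`, `kl_gauss`), the dimensions, and the freeze-the-backbone training scheme are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Task-agnostic VAE backbone, pre-trained on natural images
    with a standard N(0, I) prior. (Illustrative architecture.)"""
    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

class TaskPrior(nn.Module):
    """Lightweight task-conditioned prior p(z | task): the only module
    updated when a new task is acquired, so learning is data-efficient."""
    def __init__(self, z_dim=32, n_tasks=2):
        super().__init__()
        self.mu = nn.Embedding(n_tasks, z_dim)
        self.logvar = nn.Embedding(n_tasks, z_dim)

    def forward(self, task_id):
        return self.mu(task_id), self.logvar(task_id)

def kl_gauss(mu_q, lv_q, mu_p, lv_p):
    """KL( N(mu_q, exp(lv_q)) || N(mu_p, exp(lv_p)) ), summed over z."""
    return 0.5 * ((lv_p - lv_q)
                  + ((lv_q.exp() + (mu_q - mu_p) ** 2) / lv_p.exp())
                  - 1.0).sum(-1)

# Phase 1 (not shown): train `vae` on natural images with an N(0, I) prior.
# Phase 2: freeze the backbone and fit only the task prior, so the task
# reuses representations learned in a task-agnostic manner.
vae = VAE()
for p in vae.parameters():
    p.requires_grad_(False)

task_prior = TaskPrior()
opt = torch.optim.Adam(task_prior.parameters(), lr=1e-3)

x = torch.rand(16, 784)                      # a batch of task stimuli
task_id = torch.zeros(16, dtype=torch.long)  # task-context index

recon, mu, logvar = vae(x)
mu_p, lv_p = task_prior(task_id)
# Full ELBO; with the backbone frozen, only the KL term carries
# gradient into the task prior's parameters.
loss = (nn.functional.mse_loss(recon, x, reduction="none").sum(-1)
        + kl_gauss(mu, logvar, mu_p, lv_p)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

Under these assumptions, presenting a stimulus that violates the trained task statistics pits the task prior against the sensory evidence, which is one way the multimodal response profiles described in the abstract could emerge.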
Supplementary Material: zip
Primary Area: applications to neuroscience & cognitive science
Submission Number: 20653