Impression learning: Online representation learning with synaptic plasticity

May 21, 2021 (edited Oct 25, 2021) · NeurIPS 2021 Poster
  • Keywords: computational neuroscience, probabilistic computation, synaptic plasticity, neural sampling, Wake-Sleep
  • TL;DR: We derive an unsupervised local synaptic plasticity rule that trains neural circuits online to infer latent structure from time-varying sensory stimuli.
  • Abstract: Understanding how the brain constructs statistical models of the sensory world remains a longstanding challenge for computational neuroscience. Here, we derive an unsupervised local synaptic plasticity rule that trains neural circuits to infer latent structure from sensory stimuli via a novel loss function for approximate online Bayesian inference. The learning algorithm is driven by a local error signal computed between two factors that jointly contribute to neural activity: stimulus drive and internal predictions --- the network's 'impression' of the stimulus. Physiologically, we associate these two components with the basal and apical dendrites of pyramidal neurons, respectively. We show that learning can be implemented online, is capable of capturing temporal dependencies in continuous input streams, and generalizes to hierarchical architectures. Furthermore, we demonstrate both analytically and empirically that the algorithm is more data-efficient than a three-factor plasticity alternative, enabling it to learn statistics of high-dimensional, naturalistic inputs. Overall, the model provides a bridge from mechanistic accounts of synaptic plasticity to algorithmic descriptions of unsupervised probabilistic learning and inference.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/colinbredenberg/Impression-Learning-Camera-Ready
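To make the abstract's core idea concrete, here is a highly simplified, hypothetical sketch of the kind of local update it describes: neural activity receives a bottom-up stimulus drive (the basal analogue) and a top-down internal prediction (the apical analogue), and the prediction weights are adjusted online using only the local mismatch between the two. All names (`W`, `M`, `eta`), the toy stimulus, and the specific update rule are our illustrative assumptions, not the paper's actual derivation; see the linked repository for the real algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_latent = 8, 4

# Hypothetical toy network (not the paper's model):
# W maps stimuli to basal drive (held fixed here);
# M produces the apical prediction and is learned online.
W = rng.normal(scale=0.5, size=(n_latent, n_stim))
M = np.zeros((n_latent, n_latent))
eta = 0.05  # learning rate

z = np.zeros(n_latent)  # neural activity
errs = []
for t in range(2000):
    # Temporally structured stimulus: drifting sinusoidal pattern plus noise
    x = np.sin(0.1 * t + np.arange(n_stim)) + 0.1 * rng.normal(size=n_stim)
    basal = W @ x        # stimulus drive (basal-dendrite analogue)
    apical = M @ z       # internal prediction (apical-dendrite analogue)
    err = basal - apical # local error: mismatch between the two factors
    # Local update: each synapse sees only its postsynaptic error
    # and presynaptic activity (an LMS-style rule, our simplification)
    M += eta * np.outer(err, z)
    z = np.tanh(basal)   # activity carried forward for the next prediction
    errs.append(np.mean(err ** 2))

print("early MSE:", np.mean(errs[:100]))
print("late MSE:", np.mean(errs[-100:]))
```

Because the stimulus stream has temporal structure, the prediction error shrinks as `M` learns to anticipate the next basal drive from the previous activity, which is the sense in which learning here is both online and driven entirely by locally available quantities.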