Similarity-preserving Neural Networks from GPLVM and Information Theory

02 Oct 2022, 19:07 (modified: 21 Nov 2022, 07:04) · InfoCog @ NeurIPS 2022 Poster
Keywords: Neural networks, Biologically plausible, Hebbian Learning, GPLVM, Similarity matching, Information Theory
TL;DR: This work proposes a novel neural network derived from GPLVM and grounded in information theory.
Abstract: This work proposes a way of deriving the structure of plausible canonical microcircuit models, replete with feedforward, lateral, and feedback connections, out of information-theoretic considerations. The resulting circuits show biologically plausible features, such as being trainable online and having local synaptic update rules reminiscent of the Hebbian principle. Our work achieves these goals by rephrasing Gaussian Process Latent Variable Models as a special case of the more recently developed similarity matching framework. One remarkable aspect of the resulting network is the role of lateral interactions in preventing overfitting. Overall, our study emphasizes the importance of recurrent connections in neural networks, both for cognitive tasks in the brain and applications to artificial intelligence.
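The similarity-matching framework invoked in the abstract admits a compact online formulation: each input drives recurrent neural dynamics to a fixed point, after which feedforward weights are updated by a Hebbian rule and lateral weights by an anti-Hebbian one, using only locally available pre- and post-synaptic activity. The sketch below illustrates this generic circuit in the spirit of that framework; it is not the paper's GPLVM-derived model, and all function names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def similarity_matching(X, k, lr=0.01, seed=0):
    """Minimal online similarity-matching circuit (illustrative sketch,
    not the paper's GPLVM-derived network).

    X : (n_samples, d) array of inputs, presented one at a time.
    k : number of output neurons.
    Returns feedforward weights W (k, d), lateral weights M (k, k),
    and the stream of outputs Y (n_samples, k).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((k, d)) / np.sqrt(d)  # feedforward weights
    M = np.eye(k)                                 # lateral (recurrent) weights
    Y = []
    for x in X:
        # Neural dynamics settle at the fixed point M y = W x:
        # feedforward drive balanced by lateral inhibition.
        y = np.linalg.solve(M, W @ x)
        # Local updates: Hebbian for W (pre x, post y),
        # anti-Hebbian for M (post-post correlations).
        W += lr * (np.outer(y, x) - W)
        M += lr * (np.outer(y, y) - M)
        Y.append(y)
    return W, M, np.array(Y)
```

Because M starts at the identity and each update averages in a positive-semidefinite outer product, M remains symmetric positive definite, so the fixed-point solve is always well posed; the lateral term is what decorrelates the outputs, consistent with the abstract's point about lateral interactions acting as a regularizer.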
In-person Presentation: yes
