Working memory facilitates reward-modulated Hebbian learning in recurrent neural networks

Published: 02 Oct 2019, Last Modified: 14 Apr 2024
Real Neurons & Hidden Units @ NeurIPS 2019 Poster
TL;DR: We show that a working memory input to a reservoir network makes a local reward-modulated Hebbian rule perform as well as recursive least-squares (aka FORCE)
Keywords: reservoir networks, recurrent neural networks, local rules, Hebbian rules, continuous attractors
Abstract: Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance. We show that a network can learn complicated sequences with a reward-modulated Hebbian learning rule if the network of reservoir neurons is combined with a second network that serves as a dynamic working memory and provides a spatio-temporal backbone signal to the reservoir. In combination with the working memory, reward-modulated Hebbian learning of the readout neurons performs as well as FORCE learning, but with the advantage of a biologically plausible interpretation of both the learning rule and the learning paradigm.
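The abstract describes training only the readout of a fixed recurrent reservoir with a reward-modulated Hebbian rule rather than recursive least-squares. Below is a minimal, hypothetical sketch of that ingredient in NumPy: a fixed random reservoir, a noisy (exploratory) linear readout, and a readout-weight update gated by whether instantaneous performance beats its running average (an exploratory-Hebbian-style rule). All parameter values and names here are illustrative assumptions, not taken from the paper, and the sketch omits the paper's key working-memory input.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200          # reservoir size (illustrative)
dt = 0.01        # Euler time step
tau = 0.05       # neuron time constant
g = 1.2          # recurrent gain

J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # fixed recurrent weights
w = np.zeros(N)                                   # plastic readout weights
w_fb = rng.uniform(-1.0, 1.0, N)                  # readout-to-reservoir feedback

T = 2000
target = np.sin(2 * np.pi * np.arange(T) * dt / 0.5)  # toy target sequence

eta = 5e-4       # learning rate
sigma = 0.05     # exploratory readout noise
alpha = 0.8      # low-pass filter factor
z_bar = 0.0      # running average of readout
P_bar = 0.0      # running average of performance

x = 0.1 * rng.standard_normal(N)
errors = []
for t in range(T):
    r = np.tanh(x)
    z = w @ r + sigma * rng.standard_normal()   # noisy readout = exploration
    P = -(z - target[t]) ** 2                   # instantaneous reward signal
    M = 1.0 if P > P_bar else 0.0               # reward modulation gate
    # Local three-factor update: reward gate x postsynaptic deviation x presynaptic rate
    w += eta * M * (z - z_bar) * r
    z_bar = alpha * z_bar + (1 - alpha) * z
    P_bar = alpha * P_bar + (1 - alpha) * P
    # Leaky reservoir dynamics with readout feedback
    x += (dt / tau) * (-x + J @ r + w_fb * z)
    errors.append((z - target[t]) ** 2)
```

In this sketch all plasticity is local to the readout synapses and gated by a scalar reward signal, in contrast to FORCE/recursive least-squares, which requires a running inverse-correlation matrix. The paper's contribution is that adding a dynamic working-memory input to the reservoir lets such a local rule match FORCE performance; that input is not modeled here.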
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1910.10559/code)