Abstract: Recurrent networks of spiking neurons (RSNNs) underlie the astounding computing and learning capabilities of the brain. But the computing and learning capabilities of RSNN models have remained poor, at least in comparison with artificial neural networks (ANNs). We address two possible reasons for this gap. One is that RSNNs in the brain are neither randomly connected nor designed according to simple rules, and they do not start learning as tabula rasa networks. Rather, RSNNs in the brain were optimized for their tasks through evolution, development, and prior experience. Details of these optimization processes are largely unknown, but their functional contribution can be approximated through powerful optimization methods such as backpropagation through time (BPTT).
A second major mismatch between RSNNs in the brain and RSNN models is that the latter exhibit only a small fraction of the dynamics of neurons and synapses found in the brain. We include neurons in our RSNN model that reproduce one prominent dynamical process of biological neurons that takes place on the behaviourally relevant time scale of seconds: neuronal adaptation. We denote these networks as LSNNs because of their Long short-term memory. The inclusion of adapting neurons drastically increases the computing and learning capability of RSNNs if they are trained and configured by deep learning (BPTT combined with a rewiring algorithm that optimizes the network architecture). In fact, the computational performance of these RSNNs approaches for the first time that of LSTM networks.
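To make the adaptation mechanism concrete, here is a minimal sketch of a single adaptive leaky integrate-and-fire neuron of the kind such models typically use: each spike transiently raises the firing threshold, and that increase decays on a time scale of seconds. All parameter names and values (tau_m, tau_a, beta, v_th0) are illustrative assumptions, not the exact constants of the paper.

```python
import numpy as np

def simulate_adaptive_lif(inputs, dt=1.0, tau_m=20.0, tau_a=1200.0,
                          v_th0=1.0, beta=1.7):
    """Simulate one adaptive LIF neuron; parameter values are illustrative."""
    alpha = np.exp(-dt / tau_m)   # membrane decay per time step (ms scale)
    rho = np.exp(-dt / tau_a)     # threshold-adaptation decay (seconds scale)
    v, b = 0.0, 0.0               # membrane potential, adaptation variable
    spikes = np.zeros(len(inputs))
    for t, i_in in enumerate(inputs):
        threshold = v_th0 + beta * b       # dynamic firing threshold
        v = alpha * v + i_in               # leaky integration of input current
        if v >= threshold:                 # spike when potential crosses threshold
            spikes[t] = 1.0
            v -= threshold                 # soft reset of the membrane potential
        b = rho * b + (1.0 - rho) * spikes[t]  # each spike raises future thresholds
    return spikes
```

Driving this neuron with a constant input current makes the inter-spike intervals grow over time (spike-frequency adaptation); because tau_a is long, the adaptation variable b effectively stores information about recent activity for seconds, which is the kind of memory referred to above.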
In addition, RSNNs with adapting neurons can acquire abstract knowledge from prior learning through a Learning-to-Learn (L2L) scheme, and transfer that knowledge in order to learn new but related tasks from very few examples. We demonstrate this for both supervised learning and reinforcement learning.
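As a rough illustration of the L2L principle (and emphatically not the paper's actual experimental setup): in the inner loop, a system adapts to each new task purely through its internal dynamics, with no weight changes, while the outer loop optimizes the fixed parameters of those dynamics across a whole family of tasks. The toy below meta-learns a single dynamics parameter eta, with a finite-difference gradient standing in for BPTT; the task family and all names are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_loss(eta, target, T=20):
    """Inner loop: the state h adapts to the task target purely through
    fixed dynamics (no weight changes within the episode)."""
    h, loss = 0.0, 0.0
    for _ in range(T):
        err = target - h          # feedback signal observed each step
        loss += err ** 2
        h += eta * err            # state update: learning via dynamics only
    return loss / T

def mean_loss(eta, targets):
    return np.mean([episode_loss(eta, c) for c in targets])

# Outer loop: optimize eta across a family of tasks (random targets),
# using a finite-difference gradient as a cheap stand-in for BPTT.
eta, lr, eps = 0.01, 0.05, 1e-4
for step in range(100):
    targets = rng.normal(size=16)  # sample a batch of tasks from the family
    g = (mean_loss(eta + eps, targets) - mean_loss(eta - eps, targets)) / (2 * eps)
    eta -= lr * g

print(f"meta-learned inner adaptation rate: {eta:.3f}")
```

After outer-loop training, eta approaches a value that lets the state track a brand-new target within a few steps: the system has acquired, from the task family, the ability to learn new instances from very few examples.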