Kernel RNN Learning (KeRNL)

27 Sept 2018, 22:37 (edited 10 Feb 2022) · ICLR 2019 Conference Blind Submission
  • Keywords: RNNs, Biologically plausible learning rules, Algorithm, Neural Networks, Supervised Learning
  • TL;DR: A biologically plausible learning rule for training recurrent neural networks
  • Abstract: We describe Kernel RNN Learning (KeRNL), a reduced-rank, temporal eligibility-trace-based approximation to backpropagation through time (BPTT) for training recurrent neural networks (RNNs) that performs competitively with BPTT on long time-dependence tasks. The approximation replaces a rank-4 gradient learning tensor, which describes how past hidden unit activations affect the current state, with a simple reduced-rank product of a sensitivity weight and a temporal eligibility trace. In this structured approximation, motivated by node perturbation, the sensitivity weights and eligibility kernel time scales are themselves learned by applying perturbations. The rule represents another step toward biologically plausible or neurally inspired ML, with lower complexity: relaxed architectural requirements (no symmetric return weights), a smaller memory demand (no unfolding and storage of states over time), and a shorter feedback time. (An illustrative sketch of the resulting update appears after this list.)
  • Data: [MNIST](https://paperswithcode.com/dataset/mnist)
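Since the abstract compresses the learning rule into a few sentences, here is a minimal NumPy sketch of what such a reduced-rank, eligibility-trace update could look like. Everything concrete below is an assumption of ours rather than the paper's implementation: a vanilla tanh RNN, exponential kernels K(τ) = exp(−γ_i τ), the names `beta` (sensitivity weights) and `gamma` (inverse kernel time scales), a squared-error readout, and holding `beta`/`gamma` fixed instead of learning them by perturbation as the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 4, 16, 2, 20
lr = 1e-2

# Network parameters (hypothetical toy sizes).
W_in = rng.normal(0.0, 0.3, (n_hid, n_in))
W_rec = rng.normal(0.0, 1.0 / np.sqrt(n_hid), (n_hid, n_hid))
W_out = rng.normal(0.0, 0.3, (n_out, n_hid))

# KeRNL's learned quantities; the paper learns these by perturbation,
# but here they are simply fixed for illustration.
beta = rng.normal(0.0, 0.1, (n_hid, n_hid))  # sensitivity weights beta_{ki}
gamma = np.full(n_hid, 0.1)                  # per-unit inverse time scales

h = np.zeros(n_hid)
elig = np.zeros((n_hid, n_hid))  # one eligibility trace per recurrent synapse

x_seq = rng.normal(size=(T, n_in))  # dummy input sequence
y_tgt = rng.normal(size=n_out)      # dummy target at the final step

for t in range(T):
    pre = h.copy()                  # presynaptic (previous-step) activity
    a = W_rec @ h + W_in @ x_seq[t]
    h = np.tanh(a)
    dphi = 1.0 - h ** 2             # tanh'(a)
    # Recursive eligibility trace with kernel K(tau) = exp(-gamma_i * tau),
    # using exp(-gamma_i) ~= 1 - gamma_i for small gamma_i:
    #   e_ij(t) = (1 - gamma_i) * e_ij(t-1) + phi'(a_i(t)) * h_j(t-1)
    elig = (1.0 - gamma)[:, None] * elig + dphi[:, None] * pre[None, :]

# Credit assignment at the end of the sequence: the sensitivity
# dh_k(T)/dh_i(t) that BPTT would compute by unrolling is replaced by
# beta_{ki} * K_i(T - t), so no states are stored over time.
err = W_out @ h - y_tgt          # readout error (squared-error loss gradient)
delta = W_out.T @ err            # dL/dh(T)
W_rec -= lr * (beta.T @ delta)[:, None] * elig
```

The structural point is visible in the last line: full BPTT would need the rank-4 tensor ∂h_k(T)/∂W_ij, and hence unfolding and storing the network's states in time, whereas here credit arrives through the fixed-size `beta` matrix and a trace that is updated locally and strictly forward in time.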