Keywords: RNN, replay, hippocampus, path integration, predictive, generative model, Langevin sampling, gradients, score, adaptation, underdamped, momentum, compression, exploration
TL;DR: Without inputs, path-integrating RNNs (like the hippocampus) generate “replay” activity. We describe this as non-stationary Langevin sampling and examine existing methods, as well as our own, for biasing it via modified RNN dynamics.
Abstract: Biological neural networks (like the hippocampus) can internally generate "replay" resembling stimulus-driven activity.
Recent computational models of replay use noisy recurrent neural networks (RNNs) trained to path-integrate.
Replay in these networks has been described as Langevin sampling, but recently proposed modifications of noisy RNN replay have moved beyond this description.
We re-examine noisy RNN replay as sampling to understand or improve it in three ways:
(1) Under simple assumptions, we prove that the gradients that replay activity should follow are time-varying and difficult to estimate, yet they readily motivate the use of hidden state leakage in RNNs for replay.
(2) We confirm that hidden state adaptation (negative feedback) encourages exploration in replay, but show that it incurs non-Markov sampling that also slows replay.
(3) We propose the first model of temporally compressed replay in noisy path-integrating RNNs through hidden state momentum, connect it to underdamped Langevin sampling and short-term facilitation, and show that, when combined with adaptation, it counters slowness while maintaining exploration.
We verify our findings via path integration of 2D paths in T-maze and triangular environments, and of high-dimensional paths of synthetic rat place cell activity.
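To make the abstract's terms concrete, the following is a minimal NumPy sketch (not the submission's model or code; all symbols and parameter values are illustrative assumptions) of a noisy leaky RNN whose input-free activity plays the role of Langevin-like sampling, together with the adaptation (point 2) and momentum/underdamped (point 3) terms.

```python
# Illustrative sketch: with inputs removed, the hidden state h of a noisy leaky RNN
# drifts under recurrent drive plus Gaussian noise (Langevin-like sampling).
# The adaptation variable a provides negative feedback on h; the velocity variable v
# adds momentum, giving underdamped dynamics. Parameters are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                       # number of hidden units (arbitrary)
W = rng.normal(0, 1 / np.sqrt(N), (N, N))    # stand-in for trained path-integration weights

dt, tau = 0.1, 1.0      # time step and hidden-state time constant (leak)
tau_a, beta = 5.0, 0.5  # adaptation time constant and strength
gamma = 0.8             # momentum coefficient (underdamped case)
sigma = 0.2             # noise amplitude

h = rng.normal(0, 0.1, N)   # hidden state
a = np.zeros(N)             # adaptation (negative feedback on h)
v = np.zeros(N)             # momentum ("velocity") of the hidden state

for t in range(1000):
    drive = np.tanh(W @ h) - beta * a - h    # recurrent drive, adaptation feedback, leak
    noise = sigma * np.sqrt(dt) * rng.normal(size=N)
    # Underdamped (momentum) update; with gamma = 0 this reduces to the usual
    # overdamped, Langevin-like replay step h += (dt/tau)*drive + noise.
    v = gamma * v + (dt / tau) * drive + noise
    h = h + v
    a = a + (dt / tau_a) * (h - a)           # adaptation slowly tracks h
```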
Supplementary Material: zip
Primary Area: applications to neuroscience & cognitive science
Submission Number: 18751