Brain-Inspired Architectures for Efficient and Meaningful Learning from Temporally Smooth Data

Anonymous

09 Oct 2020 (modified: 05 May 2023), submitted to SVRHM@NeurIPS
Keywords: biologically plausible, incremental learning, leaky integrator, multiscale, hierarchical processing, timescales
Abstract: How can learning systems exploit the temporal smoothness of real-world training data? We tested the learning performance of neural networks equipped with two architectural features inspired by the temporal properties of neural circuits. First, because brain dynamics are correlated over time, we implemented a leaky memory mechanism in the hidden representations of the networks. Second, because cortical circuits can rapidly shift their internal state, “resetting” their local memory, we implemented a gating mechanism that could reset the leaky memory. How do these architectural features affect learning efficiency, and how do they shape the representations that the networks learn? We found that networks equipped with leaky memory and gating could exploit the temporal smoothness of the training data, surpassing the performance of conventional feedforward networks. Moreover, networks with multi-scale leaky memory and gating learned internal representations that “un-mixed” data sources varying on fast and slow timescales across training samples. Altogether, we showed that these brain-inspired architectural mechanisms enabled neural networks to learn more efficiently from temporally smooth data and to generate internal representations that separate timescales in the training signal.
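The two mechanisms named in the abstract can be summarized in a small sketch. Below is a minimal, illustrative PyTorch implementation of a hidden layer with leaky memory and a reset gate; the class name LeakyGatedLayer, the leak rate alpha, and the scalar sigmoid gate are assumptions made for this example, not the paper's actual implementation.

import torch
import torch.nn as nn


class LeakyGatedLayer(nn.Module):
    """Hidden layer whose activations are a leaky running average of its
    feedforward drive, with a gate that can reset that memory."""

    def __init__(self, in_dim: int, hidden_dim: int, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha                        # leak rate: weight given to past memory
        self.fc = nn.Linear(in_dim, hidden_dim)   # feedforward drive from the current input
        self.gate = nn.Linear(in_dim, 1)          # scalar gate; near 0 resets the memory
        self.register_buffer("memory", torch.zeros(hidden_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        keep = torch.sigmoid(self.gate(x))        # (batch, 1): 1 = keep memory, 0 = reset
        drive = torch.relu(self.fc(x))            # (batch, hidden_dim)
        # Leaky integration: blend the retained memory with the new drive.
        hidden = keep * self.alpha * self.memory + (1.0 - self.alpha) * drive
        # Store state for the next sample without backpropagating through time.
        self.memory = hidden.mean(dim=0).detach()
        return hidden


if __name__ == "__main__":
    layer = LeakyGatedLayer(in_dim=10, hidden_dim=32)
    stream = torch.randn(100, 1, 10)              # a temporally ordered stream of samples
    for x_t in stream:                            # samples arrive one at a time
        h_t = layer(x_t)                          # hidden state mixes past and present inputs
    print(h_t.shape)                              # torch.Size([1, 32])

A multi-scale variant, in the spirit of the abstract's multi-scale leaky memory, could maintain several such memory traces with different leak rates and concatenate them.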