Learning to solve the credit assignment problem

Published: 02 Oct 2019, Last Modified: 21 Apr 2024
Real Neurons & Hidden Units @ NeurIPS 2019 Poster
TL;DR: Perturbations can be used to learn feedback weights on large fully connected and convolutional networks.
Keywords: biologically plausible deep learning, feedback alignment, REINFORCE, node perturbation
Abstract: Backpropagation is driving today's artificial neural networks. However, despite extensive research, it remains unclear whether the brain implements this algorithm. Among neuroscientists, reinforcement learning (RL) algorithms are often seen as a realistic alternative. However, the convergence rate of such learning scales poorly with the number of neurons involved. Here we propose a hybrid learning approach, in which each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide. We show that our approach learns to approximate the gradient, and can match the performance of gradient-based learning on fully connected and convolutional networks. Learning feedback weights provides a biologically plausible mechanism for achieving good performance, without the need for precise, pre-specified learning rules.
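To make the hybrid idea concrete, here is a minimal sketch of the general scheme the abstract describes: node perturbation yields a noisy, REINFORCE-style estimate of the gradient at a hidden layer, and a feedback weight matrix is regressed toward that estimate so that it comes to approximate the transpose feedback that backpropagation would use. All dimensions, rates, and variable names (`B`, `g_np`, `g_fb`, etc.) are hypothetical choices for illustration, not the paper's exact architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network; all sizes are illustrative assumptions.
n_in, n_hid, n_out = 5, 8, 3
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B  = rng.normal(0, 0.1, (n_hid, n_out))   # feedback weights to be learned

sigma = 1e-3   # perturbation scale
lr_B  = 0.05   # feedback learning rate
K     = 20     # perturbations averaged per example

def forward(x, noise=0.0):
    h = W1 @ x + noise          # pre-activation, optionally perturbed
    r = np.maximum(h, 0.0)      # ReLU
    return h, r, W2 @ r

def cosine(A, C):
    a, c = A.ravel(), C.ravel()
    return float(a @ c / (np.linalg.norm(a) * np.linalg.norm(c)))

align_before = cosine(B, W2.T)  # alignment with the "true" feedback W2.T

for step in range(3000):
    x = rng.normal(size=n_in)
    t = rng.normal(size=n_out)          # arbitrary regression target
    h, r, y = forward(x)
    L = 0.5 * np.sum((y - t) ** 2)

    # Node perturbation: REINFORCE-style estimate of dL/dh, averaged over K trials.
    g_np = np.zeros(n_hid)
    for _ in range(K):
        xi = sigma * rng.normal(size=n_hid)
        _, _, y_p = forward(x, noise=xi)
        Lp = 0.5 * np.sum((y_p - t) ** 2)
        g_np += (Lp - L) / sigma**2 * xi
    g_np /= K

    # Feedback-weight prediction of the same gradient, gated by the ReLU.
    e = y - t                            # dL/dy for squared error
    g_fb = (B @ e) * (h > 0)

    # Regress B toward the perturbation-based estimate (normalized delta rule).
    delta = (g_fb - g_np) * (h > 0)
    B -= lr_B * np.outer(delta, e) / (e @ e + 1e-8)

align_after = cosine(B, W2.T)
print(f"alignment with W2.T: before={align_before:.2f}, after={align_after:.2f}")
```

Under this sketch, `B` is never told about `W2`; it only ever sees perturbation-driven loss changes, yet its rows align with the corresponding columns of `W2`, which is the sense in which feedback weights are "learned" rather than pre-specified.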
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1906.00889/code)