Keywords: biologically plausible algorithm, backward locking problem, biologically inspired algorithm, target propagation
TL;DR: We propose counter-current learning, a biologically inspired dual-network architecture that enables local learning and addresses the weight transport, non-local credit assignment, and backward locking problems of backpropagation.
Abstract: Despite its widespread use in neural networks, error backpropagation has been criticized for its lack of biological plausibility, as it suffers from issues such as the backward locking problem and the weight transport problem.
These limitations have motivated researchers to explore more biologically plausible learning algorithms that could potentially shed light on how biological neural systems adapt and learn.
Inspired by the counter-current exchange mechanisms observed in biological systems, we propose counter-current learning (CCL), a biologically plausible framework for credit assignment in deep learning.
This framework employs a feedforward network to process input data and a feedback network to process targets, with each network enhancing the other through anti-parallel signal propagation.
By leveraging the more informative signals from the bottom layer of the feedback network to guide the updates of the top layer of the feedforward network, and vice versa, CCL enables the simultaneous transformation of source inputs into target outputs and the dynamic mutual influence of these transformations (a minimal sketch follows the abstract).
Experimental results on the MNIST, FashionMNIST, CIFAR10, CIFAR100, and STL-10 datasets, using multi-layer perceptrons and convolutional neural networks, demonstrate that CCL achieves performance comparable to other biologically plausible algorithms while offering a more biologically realistic learning mechanism.
Furthermore, we showcase the applicability of our approach to an autoencoder task, underscoring its potential for unsupervised representation learning.
Our work presents a promising direction for biologically inspired and plausible learning algorithms, offering insights into the mechanisms of learning and adaptation in neural networks.
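To make the counter-current pairing concrete, here is a minimal PyTorch sketch of the mechanism described in the abstract. The layer sizes, the tanh activations, the squared-error local losses, and the `ccl_step` helper are illustrative assumptions, not the authors' exact formulation; the key idea shown is that each layer is trained toward the detached activation of its anti-parallel counterpart, so no gradient crosses between the two networks and every weight update is local.

```python
# Illustrative sketch of counter-current learning (CCL); layer sizes,
# activations, and the squared-error local losses are assumptions.
import torch
import torch.nn as nn

L = 3                       # depth of each network (assumed)
dims = [784, 256, 256, 10]  # MNIST-like sizes (assumed)

# Feedforward network: input -> prediction.
forward_layers = nn.ModuleList(
    [nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.Tanh()) for i in range(L)]
)
# Feedback network: target -> input space, running anti-parallel.
feedback_layers = nn.ModuleList(
    [nn.Sequential(nn.Linear(dims[L - i], dims[L - i - 1]), nn.Tanh()) for i in range(L)]
)

def ccl_step(x, y_onehot, opt):
    """One hypothetical CCL update on a batch (x, y_onehot)."""
    # Detaching each layer's input keeps every update local: the loss at
    # depth l only reaches the parameters of layer l, so no layer waits on
    # a full backward sweep (no backward locking).
    a = [x]
    for layer in forward_layers:
        a.append(layer(a[-1].detach()))
    b = [y_onehot]
    for layer in feedback_layers:
        b.append(layer(b[-1].detach()))
    # Feedforward activation a[l] has the same width as feedback activation
    # b[L - l]; each side is trained toward the *detached* activation of the
    # other, so no gradient (and no transposed weight) crosses networks.
    loss = torch.tensor(0.0)
    for l in range(1, L + 1):
        loss = loss + ((a[l] - b[L - l].detach()) ** 2).mean()  # trains feedforward
    for l in range(L):
        loss = loss + ((b[L - l] - a[l].detach()) ** 2).mean()  # trains feedback
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Hypothetical usage on a random batch:
opt = torch.optim.SGD(
    list(forward_layers.parameters()) + list(feedback_layers.parameters()), lr=0.05
)
x = torch.rand(32, 784)
y = nn.functional.one_hot(torch.randint(0, 10, (32,)), num_classes=10).float()
print(ccl_step(x, y, opt))
```

Note the two roles of `detach()` in this sketch: detaching the counterpart's activation means no update ever uses the other network's (transposed) weights, addressing weight transport, while detaching each layer's own input confines credit assignment to a single layer.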
Supplementary Material: zip
Primary Area: Deep learning architectures
Submission Number: 2266