Towards Biologically Plausible Learning By Stacking Circular Autoencoders

NLDL 2025 Conference Submission 14 · Anonymous Authors

29 Aug 2024 (modified: 16 Apr 2025) · Submitted to NLDL 2025 · CC BY 4.0
Keywords: biologically plausible architectures, self-supervised learning, autoencoders, recirculation, local learning, tourbillon, feedback alignment, forward-forward, target propagation
TL;DR: We logically derive and test a biologically plausible learning architecture that can be trained in self-supervised mode without backpropagation, by stacking circular autoencoders and training them asynchronously using recirculation algorithms.
Abstract: Training deep neural networks in biological systems faces major challenges, such as scarce labeled data and obstacles to propagating error signals in the absence of symmetric connections. We introduce Tourbillon, a new architecture that uses circular autoencoders trained with various recirculation algorithms in a self-supervised mode, with an optional top layer for classification or regression. Tourbillon is designed to address biological learning constraints rather than enhance existing engineering applications. Preliminary experiments on small benchmark datasets (MNIST, Fashion MNIST, CIFAR10) show that Tourbillon performs comparably to models trained with backpropagation and may outperform other biologically plausible approaches. The code and models are available at \url{https://anonymous.4open.science/r/Circular-Learning-4E1F}.
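To make the idea of stacking circular autoencoders trained by recirculation concrete, below is a minimal Python/NumPy sketch of one such layer using a Hinton-and-McClelland-style recirculation rule, where each weight update depends only on locally available activities. All names, shapes, the regression coefficient, and the greedy stacking loop are illustrative assumptions and not the paper's actual implementation.

```python
# Hypothetical sketch of a "circular autoencoder" layer trained with a
# recirculation-style local learning rule. Not the paper's code; shapes,
# hyperparameters, and update rules are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RecirculationLayer:
    def __init__(self, n_visible, n_hidden, lr=0.1, lam=0.75):
        # W: visible -> hidden, V: hidden -> visible (no weight symmetry assumed)
        self.W = rng.normal(0.0, 0.1, size=(n_hidden, n_visible))
        self.V = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
        self.lr, self.lam = lr, lam

    def step(self, x):
        # First pass around the loop: encode the input, then reconstruct it.
        h0 = sigmoid(self.W @ x)
        x1 = self.lam * x + (1 - self.lam) * sigmoid(self.V @ h0)  # regressed reconstruction
        # Second pass: recirculate the reconstruction through the encoder.
        h1 = sigmoid(self.W @ x1)
        # Local updates: each weight change uses only the activities at its two ends.
        self.V += self.lr * np.outer(x - x1, h0)
        self.W += self.lr * np.outer(h0 - h1, x1)
        return h0  # hidden code passed on to the next stacked layer

# Toy usage: stack two layers and train them greedily on random data,
# each layer learning from the codes produced by the layer below it.
layers = [RecirculationLayer(784, 256), RecirculationLayer(256, 64)]
for _ in range(100):
    x = rng.random(784)
    for layer in layers:
        x = layer.step(x)
```

In a full Tourbillon-style stack, a supervised read-out layer could optionally be trained on the top-level codes, but that classification head is omitted from this sketch.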
Submission Number: 14