Emergent representations in networks trained with the Forward-Forward algorithm

Published: 16 Jun 2024, Last Modified: 12 Jul 2024, HiLD at ICML 2024 Poster, CC BY 4.0
Keywords: Forward-Forward, Sparsity, Ensembles, Representations, Backpropagation
TL;DR: Neural networks trained with the Forward-Forward algorithm exhibit sparse representations that resemble those found in sensory cortex.
Abstract: Backpropagation has been criticised for its lack of biological realism. In this work, we show that the internal representations learned by networks trained with the Forward-Forward algorithm can organise spontaneously into highly sparse, category-specific ensembles. This organisation is reminiscent of what has been observed in cortical sensory areas, where neuronal ensembles have been proposed as the functional building blocks of perception and action. Our findings suggest that Forward-Forward learning may be more biologically plausible than Backpropagation, particularly in terms of the emergent representations it produces.
Student Paper: Yes
Submission Number: 26
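For readers unfamiliar with the training procedure referenced in the abstract, the sketch below illustrates the core of the Forward-Forward objective as described by Hinton (2022): each layer is trained with a purely local loss so that the "goodness" (sum of squared activations) of positive data exceeds a threshold while that of negative data falls below it. This is an illustrative PyTorch sketch, not the authors' code; the class name FFLayer, the threshold of 2.0, and the Adam learning rate are assumptions made for the example.

```python
import torch
import torch.nn as nn


class FFLayer(nn.Module):
    """A single Forward-Forward layer with a local goodness objective
    (goodness = sum of squared activations), following Hinton (2022).
    Hyperparameters here (threshold, learning rate) are illustrative."""

    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.act = nn.ReLU()
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalise the input so only the direction of the previous layer's
        # activity (not its goodness) is passed forward.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return self.act(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness of positive (real) and negative (contrastive) samples.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        # Logistic loss: push positive goodness above the threshold,
        # negative goodness below it. No gradients cross layer boundaries.
        loss = torch.nn.functional.softplus(torch.cat([
            self.threshold - g_pos,
            g_neg - self.threshold,
        ])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach outputs so the next layer is trained independently.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```

Stacking several such layers and training them greedily layer by layer, one could then record per-class activation statistics to probe the kind of sparse, category-specific ensemble structure the paper reports.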