Keywords: Forward-Forward Algorithm, Dynamic Network Architecture, Deep Learning, Representation Learning
Abstract: The Forward-Forward (FF) algorithm has emerged as a promising alternative to the traditional deep learning paradigm based on the backpropagation algorithm. However, both the original FF algorithm and several FF-based extensions rely on the quality of generated negative samples for training, which can limit their effectiveness.
In this paper, we design an FF-based algorithm for classification. Specifically, we propose the concept of support neuron (SN) sets by partitioning the neurons in each layer into several sets, each explicitly corresponding to one class. The SN set with the strongest response (goodness) determines the predicted class of the input, thereby eliminating the need for negative samples. Furthermore, inspired by how the brain functions, we introduce neuron growth and degeneration strategies: (1) when existing neurons fail to achieve satisfactory performance, new neurons are grown to assist them; and (2) neurons that remain inactive across all classes may degenerate.
Extensive experiments demonstrate that our method achieves state-of-the-art performance on the MNIST and CIFAR datasets among FF-based approaches that likewise eliminate negative samples. In addition, we empirically evaluate the effectiveness of the proposed neuron growth and degeneration mechanisms.
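A minimal sketch of the goodness-based prediction described in the abstract, assuming PyTorch; the class and parameter names (SNLayer, neurons_per_class) are illustrative assumptions, not the authors' code. Each layer's neurons are partitioned into per-class SN sets, the goodness of a set is the sum of squared activations of its neurons, and the predicted class is the set with the largest goodness, so no negative samples are required.

```python
# Illustrative sketch only: one FF-style layer whose neurons are partitioned
# into per-class "support neuron" (SN) sets. Goodness of an SN set = sum of
# squared activations of its neurons; prediction = set with highest goodness.
import torch
import torch.nn as nn


class SNLayer(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, in_dim: int, num_classes: int, neurons_per_class: int):
        super().__init__()
        self.num_classes = num_classes
        self.neurons_per_class = neurons_per_class
        self.fc = nn.Linear(in_dim, num_classes * neurons_per_class)
        self.act = nn.ReLU()

    def goodness(self, x: torch.Tensor) -> torch.Tensor:
        # Activations grouped by class: (batch, num_classes, neurons_per_class)
        h = self.act(self.fc(x))
        h = h.view(-1, self.num_classes, self.neurons_per_class)
        # Per-class goodness: sum of squared activations within each SN set
        return (h ** 2).sum(dim=-1)

    def predict(self, x: torch.Tensor) -> torch.Tensor:
        # Predicted class = SN set with the strongest response
        return self.goodness(x).argmax(dim=-1)


if __name__ == "__main__":
    layer = SNLayer(in_dim=784, num_classes=10, neurons_per_class=20)
    x = torch.randn(4, 784)          # e.g. a batch of flattened MNIST images
    print(layer.goodness(x).shape)   # torch.Size([4, 10])
    print(layer.predict(x))          # one predicted class index per sample
```

The growth and degeneration strategies would, under this sketch, correspond to adding neurons to an underperforming class's SN set and pruning neurons whose activations stay near zero across all classes; the paper's actual criteria and schedules are not reproduced here.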
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 9154