Abstract: The backpropagation (BP) algorithm has played a significant role in the development of deep learning. However, it suffers from limitations such as getting stuck in local minima and vanishing/exploding gradients, which have also raised questions about its biological plausibility. To address these limitations, alternatives to backpropagation have been preliminarily explored, the Forward–Forward (FF) algorithm being one of the best known. In this paper we propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like FF, does not rely on BP optimization. Unlike FF, however, CaFo directly outputs a label distribution at each cascaded block and removes the need to generate additional negative samples, making both training and testing more efficient. Moreover, in the CaFo framework each block can be trained in parallel, allowing easy deployment on parallel acceleration systems. The proposed method is evaluated on four public image classification benchmarks, and the experimental results show significant improvements in prediction accuracy over recently proposed baselines. The code is available at: https://github.com/Graph-ZKY/CaFo.
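To make the cascaded, locally trained design concrete, the following PyTorch snippet is a minimal conceptual sketch, not the paper's implementation: the names `CascadeBlock`, `train_block`, and `predict` are hypothetical, and we assume frozen randomly initialized convolutional features with a per-block linear predictor trained on a cross-entropy loss. The key property it illustrates is that gradients never cross block boundaries, so blocks can be trained independently and in parallel.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeBlock(nn.Module):
    """One cascaded block: fixed conv features + a trainable local predictor.

    Hypothetical sketch; assumes the feature extractor stays at its random
    initialization, so no backpropagation is needed across blocks.
    """

    def __init__(self, in_ch, out_ch, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        for p in self.features.parameters():
            p.requires_grad = False  # frozen: only the predictor is trained
        # Local predictor mapping pooled features to class logits.
        self.predictor = nn.Linear(out_ch, num_classes)

    def forward(self, x):
        h = self.features(x)
        logits = self.predictor(h.mean(dim=(2, 3)))  # global average pooling
        return h, logits

def train_block(block, loader, epochs=5, lr=1e-2):
    """Train one block's predictor locally with cross-entropy.

    In the full cascade, later blocks would consume the (detached) features
    produced by earlier blocks; gradients stay inside each block, so all
    blocks can be trained independently and in parallel.
    """
    opt = torch.optim.SGD(block.predictor.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            _, logits = block(x)
            loss = F.cross_entropy(logits, y)
            opt.zero_grad()
            loss.backward()  # gradient never leaves this block
            opt.step()

@torch.no_grad()
def predict(blocks, x):
    """Fuse the per-block label distributions (here, by simple averaging)."""
    h, probs = x, 0.0
    for block in blocks:
        h, logits = block(h)
        probs = probs + F.softmax(logits, dim=1)
    return (probs / len(blocks)).argmax(dim=1)

# Example cascade for 32x32 RGB images with 10 classes:
# blocks = [CascadeBlock(3, 32, 10), CascadeBlock(32, 64, 10)]
```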