Keywords: Forward-Forward algorithm, Hebbian learning, Gradient-free optimization
TL;DR: HebbFF is a gradient-free variant of the Forward-Forward algorithm that replaces local gradients with Hebbian updates, matching FF’s accuracy while training faster and using less memory.
Abstract: We introduce Hebbian Forward-Forward (HebbFF), a gradient-free alternative to the Forward-Forward (FF) algorithm. HebbFF replaces the local gradient computations in FF with classical Hebbian plasticity, modulated by a gating rule based on the "goodness" measure introduced in FF. This change eliminates gradient calculations entirely, reducing computational overhead and memory usage. On the MNIST and FashionMNIST datasets, HebbFF matches FF's accuracy while training substantially faster and using less memory. Although HebbFF achieves lower predictive performance than backpropagation, it offers more resource-efficient training. HebbFF therefore establishes a stronger baseline than FF for exploring scalable, gradient-free learning in deep networks.
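A minimal sketch of the mechanism the abstract describes might look like the following. This is an illustration only: the abstract does not specify the exact gating rule, so the threshold `theta`, the learning rate `lr`, and the sign convention for positive vs. negative samples are all assumptions, not the paper's actual specification. Goodness follows the standard FF definition (sum of squared activations).

```python
import numpy as np

def goodness(h):
    # FF "goodness": sum of squared activations, one value per sample.
    return np.sum(h ** 2, axis=1)

def hebbff_update(W, x, theta=2.0, lr=0.01, positive=True):
    """Hypothetical goodness-gated Hebbian update for one layer.

    W: (d_in, d_out) weight matrix; x: (batch, d_in) inputs.
    The gating rule below is an assumption for illustration.
    """
    h = np.maximum(x @ W, 0.0)  # forward pass through a ReLU layer
    g = goodness(h)             # per-sample goodness

    # Assumed gate: for positive data, update samples whose goodness is
    # still below the threshold (to raise it); for negative data, update
    # samples whose goodness is above the threshold (to lower it).
    gate = (g < theta) if positive else (g > theta)
    sign = 1.0 if positive else -1.0

    # Classical Hebbian outer-product update, gated and signed,
    # averaged over the gated samples. No gradients are computed.
    dW = sign * (x[gate].T @ h[gate]) / max(int(gate.sum()), 1)
    return W + lr * dW
```

Per FF's layer-local training scheme, an update like this would be applied independently at each layer, with positive (real) and negative (corrupted) data pushed through in separate passes.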
Submission Number: 147