Abstract: Gradient-based optimization has been a cornerstone of machine learning, enabling the vast advances of Artificial Intelligence (AI) development over the past decades. However, this type of optimization requires differentiation, and with recent evidence of the benefits of non-differentiable (e.g. neuromorphic) architectures over classical models w.r.t. efficiency, such constraints can become limiting in the future. We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural network-like predictors that utilizes methods from the domain of explainability to decompose a reward to individual neurons based on their respective contributions. Leveraging these neuron-wise rewards, our method then implements a greedy approach reinforcing helpful parts of the network and weakening harmful ones. While having computational complexity comparable to gradient descent, LFP does not require gradient computation and generates sparse and thereby memory- and energy-efficient parameter updates and models. We establish the convergence of LFP theoretically and empirically, demonstrating its effectiveness on various models and datasets. Via two applications — neural network pruning and the approximation-free training of Spiking Neural Networks (SNNs) — we demonstrate that LFP combines increased efficiency in terms of computation and representation with flexibility w.r.t. choice of model architecture and objective function.
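To make the idea of neuron-wise reward decomposition more concrete, below is a minimal, hypothetical NumPy sketch for a single linear layer: output feedback is split across connections in proportion to their contributions (in the spirit of LRP-style relevance propagation) and a simple sign-based update reinforces positively rated connections. The function name, normalization, and update rule are illustrative assumptions for exposition, not the exact formulation used by LFP; see the linked code repository for the authors' implementation.

```python
import numpy as np

def lfp_layer_step(a_in, W, feedback_out, lr=0.01, eps=1e-9):
    """Illustrative single-layer step of a reward-decomposition update (not the paper's exact rule).

    a_in:         (d_in,)  activations entering the layer
    W:            (d_in, d_out) weight matrix
    feedback_out: (d_out,) reward/feedback assigned to the layer's outputs
    Returns the feedback propagated to the layer's inputs and an updated W.
    """
    # Contribution of each connection to each output neuron.
    z = a_in[:, None] * W                       # (d_in, d_out)
    z_sum = z.sum(axis=0, keepdims=True) + eps  # per-output normalizer
    share = z / z_sum                           # fraction of each output attributable to each connection

    # Distribute output feedback over connections, then aggregate per input neuron.
    f_conn = share * feedback_out[None, :]      # feedback assigned to each connection
    feedback_in = f_conn.sum(axis=1)            # feedback propagated to the inputs

    # Greedy, gradient-free update: strengthen connections receiving positive feedback,
    # weaken those receiving negative feedback (sign-based rule is an assumption here).
    W_new = W + lr * np.sign(W) * f_conn
    return feedback_in, W_new

# Toy usage: 3 inputs, 2 outputs, reward +1 on the first output, -1 on the second.
rng = np.random.default_rng(0)
a = rng.random(3)
W = rng.normal(size=(3, 2))
fb_in, W_new = lfp_layer_step(a, W, np.array([1.0, -1.0]))
```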
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We made the following changes for the camera-ready version:
- Ensured all changes requested by the reviewers were included
- Extended the SNN experiments to more complex architectures and to datasets beyond MNIST (Section 4.2, Figures 10 and 11).
- Updated the discussion accordingly
- Updated the results on ViT (Appendix A.9) with an improved set of hyperparameters and a longer training time.
- Moved several Appendix Figures to their own Appendix Section (A.10 Additional Figures)
- Several minor reformulations and corrections of grammar/punctuation
Code: https://github.com/leanderweber/layerwise-feedback-propagation
Supplementary Material: zip
Assigned Action Editor: ~Yani_Ioannou1
Submission Number: 4172