Keywords: brain-inspired learning; memory-efficient learning; prospective configuration
Abstract: Multi-layer perceptron (MLP) training via backpropagation faces fundamental memory limitations that constrain deployment in resource-constrained environments such as edge devices.
We introduce Prospective Learning, a novel training paradigm inspired by biological prospective configuration mechanisms that replaces gradient-based optimization with direct algebraic weight computation.
By transforming weight updates into regularized least-squares optimization problems that can be solved analytically layer by layer, it eliminates the need for gradient storage and intermediate activation caching, significantly reducing resource consumption.
In addition, the framework integrates brain-inspired sparse connectivity initialization and an adaptive metaplasticity mechanism, which support it through network initialization and dynamic adjustment during learning, respectively.
Experiments on the MNIST, CIFAR-10, and CIFAR-100 datasets show that Prospective Learning achieves competitive accuracy, reduces memory usage by up to 55\% compared with traditional backpropagation, and consistently outperforms existing backpropagation alternatives in memory efficiency.
This memory-computation trade-off is favorable for edge scenarios where memory constraints dominate.
For example, it achieves 95.44\% accuracy on MNIST using only 38.77 MB of memory on edge devices, providing a viable solution for efficient MLP training on memory-constrained edge devices.
Our main code has been anonymously uploaded to \url{https://anonymous.4open.science/r/Prospective-Learning} without any author information.
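The abstract's core idea — replacing gradient descent with a closed-form, layer-by-layer regularized least-squares solve — can be illustrated with a minimal sketch. The function name, target-activation interface, and regularization strength below are illustrative assumptions, not the authors' actual implementation; the sketch only shows the standard ridge-regression normal equations that such a layer-wise analytic update would rely on.

```python
import numpy as np

def prospective_layer_update(X, T, lam=1e-2):
    """Hypothetical closed-form weight update for one MLP layer.

    Solves  min_W ||X W - T||^2 + lam ||W||^2  analytically via the
    regularized normal equations, so no gradients or intermediate
    activation caches need to be stored.

    X : (n_samples, n_in)  inputs to the layer
    T : (n_samples, n_out) target activations for the layer
    """
    d = X.shape[1]
    # Ridge solution: W = (X^T X + lam * I)^{-1} X^T T
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)

# Toy usage: fit one layer's weights to random target activations.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))
T = rng.standard_normal((64, 4))
W = prospective_layer_update(X, T)
print(W.shape)
```

In a full pipeline, each layer's target activations would come from a prospective-configuration-style inference step; the sketch leaves that step out and only demonstrates the memory-saving analytic solve.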
Primary Area: optimization
Submission Number: 2726