A novel high performance in-situ training scheme for open-loop tuning of the memristor neural networks

Published: 01 Jan 2025 · Last Modified: 18 Apr 2025 · Expert Syst. Appl. 2025 · CC BY-SA 4.0
Abstract: Memristor neural networks are increasingly recognized for their suitability for matrix operations and their low power consumption, offering a promising way to overcome the memory-computing separation inherent in von Neumann architectures. However, the non-ideal properties of memristors pose challenges during synaptic updating, particularly for in-situ training. In this study, we address these challenges by designing an innovative in-situ training scheme that integrates a layer-wise threshold (LW) and gradient accumulation (GA). This approach effectively enhances the training of memristor neural networks and achieves high classification accuracy. Our scheme uses open-loop tuning, which significantly reduces the number of pulse inputs during synaptic updates, thereby lowering energy consumption and minimizing time delays. Experimental results show that integrating the layer-wise threshold and gradient accumulation improves the classification accuracy of a multilayer perceptron (MLP) by 3.43%. Moreover, the scheme remains robust against Gaussian noise up to a noise factor of 0.2. We further extend our approach to convolutional neural networks (CNN), achieving an 8.63% improvement in classification performance. Both the MLP and the CNN trained with our scheme converge rapidly. Furthermore, compared with closed-loop tuning, our open-loop tuning scheme substantially reduces the pulse count during synaptic updates, yielding significant energy savings and reduced latency. In conclusion, the proposed in-situ training scheme offers high accuracy, low power consumption, and fast convergence, marking a significant advance for the future development of neuromorphic systems.
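To make the high-level idea concrete, the sketch below shows one plausible way layer-wise thresholding and gradient accumulation could be combined with open-loop pulse programming of a crossbar layer. It is an illustration only: the class name `MemristorLayer`, the per-pulse conductance step `dw_per_pulse`, and the threshold and learning-rate values are assumptions, not the authors' implementation or hyperparameters.

```python
import numpy as np

# Hypothetical sketch: the paper's exact update rule, threshold values, and
# pulse model are not reproduced here; all numbers below are placeholders.

class MemristorLayer:
    """Toy crossbar layer with open-loop, pulse-based weight updates."""

    def __init__(self, n_in, n_out, dw_per_pulse=0.01, threshold=0.05):
        self.W = np.random.uniform(-0.5, 0.5, size=(n_in, n_out))
        self.dw_per_pulse = dw_per_pulse        # assumed conductance change per pulse
        self.threshold = threshold              # layer-wise (LW) update threshold
        self.grad_acc = np.zeros_like(self.W)   # gradient accumulation (GA) buffer

    def accumulate(self, grad):
        # Accumulate gradients across mini-batches instead of updating immediately.
        self.grad_acc += grad

    def apply_update(self, lr=0.1):
        # Only weights whose accumulated update exceeds the layer-wise threshold
        # receive programming pulses; the rest keep accumulating.
        update = -lr * self.grad_acc
        mask = np.abs(update) >= self.threshold
        # Open-loop tuning: translate the desired weight change into an integer
        # pulse count and apply it once, without read-verify iterations.
        pulses = np.round(update[mask] / self.dw_per_pulse)
        self.W[mask] += pulses * self.dw_per_pulse
        # Clear the accumulator only where an update was actually applied.
        self.grad_acc[mask] = 0.0
```

Under these assumptions, the design intent matches the abstract's claims: small per-batch gradients are not lost but accumulated until they cross the layer-wise threshold, and each programming event is a single open-loop pulse train rather than a closed-loop write-verify cycle, which is what reduces the total pulse count, energy, and latency.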