Abstract: Memristor Neural Networks (MNNs) stand out for their low power consumption and accelerated matrix operations, making them a promising hardware substrate for neural network implementations. The efficacy of MNNs depends strongly on the careful selection of memristor update thresholds and on the in-situ update scheme used during hardware deployment. This paper addresses both aspects by introducing a novel scheme, DTGA, that integrates a Dynamic Threshold (DT) with Gradient Accumulation (GA), exploiting the threshold properties of memristors. Realistic memristor characteristics, including pulse-to-pulse (P2P) and device-to-device (D2D) variability, are simulated by adding random noise to the VTEAM memristor model. The dynamic threshold scheme improves in-situ training accuracy by leveraging the inherent characteristics of memristors, while gradients accumulated during backpropagation finely regulate memristor updates, raising in-situ training accuracy further. Experimental results demonstrate a significant gain in test accuracy with the DTGA scheme on the MNIST dataset (82.98% to 96.15%) and the Fashion-MNIST dataset (75.58% to 82.53%). Robustness analysis shows that the DTGA scheme tolerates a random noise factor of 0.03 on MNIST and 0.02 on Fashion-MNIST, demonstrating its reliability under varied conditions. Notably, on the Fashion-MNIST dataset, the DTGA scheme yields a 7% accuracy improvement together with a 7% reduction in training time. These results affirm the efficiency and accuracy of the DTGA scheme, which is applicable beyond multilayer perceptron (MLP) networks and offers a compelling solution for the hardware implementation of diverse neuromorphic systems.
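
As a rough, non-authoritative illustration of how gradient accumulation and a threshold gate might interact in such an update rule, the Python/NumPy sketch below accumulates backpropagation gradients over several steps and applies an in-situ update only where the accumulated change exceeds a per-device threshold perturbed by random noise (a stand-in for P2P/D2D variability in the VTEAM model). All names and constants (`dtga_update`, `noisy_threshold`, `V_TH_NOMINAL`, `ACCUM_STEPS`, `NOISE_FACTOR`) are hypothetical; this is not the authors' implementation of DTGA.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical constants (not taken from the paper):
V_TH_NOMINAL = 0.1   # nominal update threshold, arbitrary units
NOISE_FACTOR = 0.03  # random-noise factor, echoing the robustness analysis
ACCUM_STEPS = 4      # backprop steps to accumulate gradients over

def noisy_threshold(base_th, noise_factor, shape):
    """Per-device threshold re-drawn at every update pulse, so both
    P2P (re-sampling) and D2D (per-element) variation are mimicked."""
    return base_th * (1.0 + noise_factor * rng.standard_normal(shape))

def dtga_update(weights, grad_buffer, grad, step, lr=0.1):
    """One backprop step of an illustrative DT+GA-style update.

    Gradients accumulate over ACCUM_STEPS steps; an update pulse is
    applied only where the accumulated change exceeds the device's
    (noisy) threshold, and the accumulator is then reset."""
    grad_buffer += grad
    if (step + 1) % ACCUM_STEPS == 0:
        th = noisy_threshold(V_TH_NOMINAL, NOISE_FACTOR, weights.shape)
        mask = np.abs(lr * grad_buffer) > th       # only supra-threshold updates fire
        weights[mask] -= lr * grad_buffer[mask]    # in-situ conductance-style update
        grad_buffer[:] = 0.0                       # reset accumulator
    return weights, grad_buffer

# Usage: drive a small weight matrix with stand-in gradients.
w = rng.standard_normal((4, 3))
buf = np.zeros_like(w)
for step in range(8):
    g = rng.standard_normal(w.shape)  # placeholder for a real backprop gradient
    w, buf = dtga_update(w, buf, g, step)
```

The design point the sketch is meant to convey: accumulation lets many small gradients combine into a change large enough to clear the device threshold, instead of being silently dropped pulse by pulse.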