Reduction of the Weight-Decay Rate of Volatile Memory Synapses in an Analog Hardware Neural Network for Accurate and Scalable On-Chip Learning
Abstract: A Volatile Memory (VM) synapse based on a conventional silicon transistor has been proposed earlier for on-chip learning in a crossbar-array-based analog hardware Neural Network (NN), because a VM synapse exhibits a more linear and symmetric synaptic characteristic than a Non-Volatile Memory (NVM) synapse, leading to higher speed and lower power consumption for on-chip learning. However, rapid weight decay in such a charge-based VM synapse can degrade the learning accuracy. In this paper, we design and simulate a VM-synapse-based crossbar array for a non-spiking NN such that the weight-decay rate is low. The extra transistors associated with each VM synapse at the junction of two crossbars, which together form the Volatile Memory Synapse Cell (VMSC), are optimized for this purpose. To do so, we also modify the standard Gradient Descent algorithm, used for non-spiking NNs, by incorporating thresholding functions into it. Through our optimized design and simulations, we show that even with a VM-synapse capacitance as low as 1.6 fF, the weight-decay rate of the VM synapse can be kept low enough (time constant ≈ 16 μs) that such a VM-synapse-based crossbar array classifies the Fisher's Iris and MNIST data-sets with high accuracy, even in the presence of device-level variations. This three-orders-of-magnitude reduction in capacitance compared to a previous report, which requires 1–10 pF per VM synapse, also reduces the area footprint per VM synapse, making VM-synapse-based on-chip learning scalable.
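To make the two ingredients in the abstract concrete, the following Python sketch illustrates (i) exponential decay of charge-based synaptic weights with the quoted time constant of ≈ 16 μs, and (ii) a gradient-descent update in which sub-threshold gradients are zeroed out. Only the time constant comes from the abstract; the update interval `DT`, threshold `THETA`, learning rate, the function names, and the exact form of the thresholding are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# tau is the weight-decay time constant quoted in the abstract (~16 us
# for a 1.6 fF synapse capacitance). The other constants are assumed
# values for illustration only, not parameters from the paper.
TAU = 16e-6    # weight-decay time constant (s), from the abstract
DT = 1e-7      # assumed interval between weight updates (s)
THETA = 0.05   # assumed gradient-magnitude threshold

def decay_weights(w, dt=DT, tau=TAU):
    """Exponential decay of charge-based VM-synapse weights over one interval."""
    return w * np.exp(-dt / tau)

def thresholded_gd_update(w, grad, lr=0.1, theta=THETA):
    """Gradient-descent step in which sub-threshold gradients are zeroed.

    This mimics the abstract's idea of incorporating a thresholding
    function into standard Gradient Descent; the paper's exact
    thresholding functions are not specified here.
    """
    masked = np.where(np.abs(grad) > theta, grad, 0.0)
    return w - lr * masked

# Example: weights decay between updates, and only large-magnitude
# gradient components actually modify the stored weights.
w = np.array([0.8, -0.3, 0.02])
g = np.array([0.2, -0.01, 0.5])
w = decay_weights(w)
w = thresholded_gd_update(w, g)
print(w)
```

The intuition this sketch captures is that a larger time constant (slower decay) leaves the stored weights closer to their trained values between updates, while the threshold suppresses small, noise-dominated updates that would otherwise be swamped by the decay.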