Recursive Binary Neural Network Learning Model with 2-bit/weight Storage Requirement

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission readers: everyone
  • Abstract: This paper presents a storage-efficient learning model called Recursive Binary Neural Networks for embedded and mobile devices that have a limited amount of on-chip data storage, such as hundreds of kilobytes. The main idea of the proposed model is to recursively recycle the data storage of synaptic weights (parameters) during training. This enables a device with a given storage constraint to train and instantiate a neural network classifier with a larger number of weights on chip, achieving better classification accuracy. Such efficient use of on-chip storage reduces off-chip storage accesses, improving the energy efficiency and speed of training. We verified the proposed training model with deep neural network classifiers on the permutation-invariant MNIST benchmark. Our model achieves a data storage requirement as low as 2 bits/weight, whereas conventional binary neural network learning models require 8 to 16 bits/weight. With the same amount of data storage, our model can train a larger network with more weights, achieving ~1% better classification accuracy than the conventional binary neural network learning model. To achieve a similar classification error, the conventional binary neural network model requires 3-4× more data storage for weights than our proposed model.
  • TL;DR: We propose a learning model that enables a DNN to learn with only 2 bits/weight, which is especially useful for on-device learning (an illustrative sketch of 2-bit weight storage follows below).
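
As a rough illustration of what a 2-bit/weight storage budget means in practice, the minimal Python sketch below quantizes a float weight tensor to four levels and packs four 2-bit codes into each byte. This is only a demonstration of the storage arithmetic under assumed uniform quantization levels and a simple packing scheme; it is not the paper's recursive training algorithm.

```python
# Illustrative sketch only: stores a weight tensor at 2 bits/weight by
# quantizing to 4 levels and packing four weights per byte. The levels and
# packing scheme are assumptions for demonstration, not the paper's method.
import numpy as np

def quantize_2bit(weights, scale=None):
    """Map float weights to 2-bit codes {0,1,2,3} over 4 uniform levels."""
    if scale is None:
        scale = np.max(np.abs(weights)) + 1e-12
    levels = np.linspace(-scale, scale, 4)          # 4 levels spanning [-scale, +scale]
    codes = np.abs(weights[..., None] - levels).argmin(axis=-1).astype(np.uint8)
    return codes, levels

def pack_codes(codes):
    """Pack four 2-bit codes into each byte (2 bits of storage per weight)."""
    flat = codes.ravel()
    pad = (-len(flat)) % 4
    flat = np.concatenate([flat, np.zeros(pad, dtype=np.uint8)]).reshape(-1, 4)
    return (flat[:, 0] | (flat[:, 1] << 2) | (flat[:, 2] << 4) | (flat[:, 3] << 6)).astype(np.uint8)

def unpack_codes(packed, n):
    """Recover the first n 2-bit codes from the packed bytes."""
    codes = np.stack([(packed >> s) & 0b11 for s in (0, 2, 4, 6)], axis=1).ravel()
    return codes[:n]

if __name__ == "__main__":
    w = np.random.randn(8, 8).astype(np.float32)
    codes, levels = quantize_2bit(w)
    packed = pack_codes(codes)
    restored = levels[unpack_codes(packed, w.size)].reshape(w.shape)
    print("bytes for 64 weights:", packed.nbytes)   # 16 bytes, i.e. 2 bits/weight
```

At this storage rate, 64 weights occupy 16 bytes instead of 128-256 bytes at the 16-32 bits/weight typical of full-precision training, which is the kind of saving the abstract's hundreds-of-kilobytes on-chip budget is concerned with.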
