Training Binarized Neural Networks using Ternary Multipliers

28 Apr 2021 · OpenReview Archive Direct Upload
Abstract: Deep learning offers the promise of intelligent devices that are able to perceive, reason, and act intuitively. The rising adoption of deep learning techniques has motivated researchers and developers to seek low-cost, high-speed software/hardware solutions for deployment on smart devices. In recent years, work has focused on reducing the complexity of inference in deep neural networks; however, a method enabling low-complexity on-chip learning is still missing. In this work, we introduce a gradient estimation method that performs back-propagation with ternary (2-bit) quantized gradients. Our method replaces all full-precision multipliers (i.e., 16-bit fixed-point multipliers) in neural network training with ternary operators while maintaining comparable accuracy. Furthermore, we propose a new stochastic computing-based neural network (SC-based NN) that employs a new stochastic representation, the dynamic sign-magnitude stochastic sequence. Our proposed SC-based NN achieves state-of-the-art results while using shorter sequences (length 16) than its SC-based counterparts.
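The abstract gives no algorithmic listing, but the central idea of training with ternary gradients can be illustrated with a short sketch. Below is a minimal NumPy example assuming a simple threshold-based ternarizer; the `ternarize` function, its 0.7-of-mean threshold heuristic (borrowed from ternary-weight-network practice), and the variable names are illustrative assumptions, not the paper's actual gradient estimator:

```python
import numpy as np

def ternarize(grad, threshold=None):
    """Quantize a gradient tensor to {-1, 0, +1} (2-bit ternary values).

    Minimal sketch (assumption, not the paper's method): entries whose
    magnitude falls below a per-tensor threshold are zeroed; the rest
    keep only their sign.
    """
    if threshold is None:
        threshold = 0.7 * np.mean(np.abs(grad))  # assumed heuristic
    q = np.sign(grad)
    q[np.abs(grad) < threshold] = 0.0
    return q  # entries in {-1.0, 0.0, +1.0}

# With ternary gradients, each weight-update multiply reduces to a
# sign flip or a skip -- no full-precision multiplier is needed:
w = np.random.randn(4, 4).astype(np.float32)
g = np.random.randn(4, 4).astype(np.float32)
lr = 0.01
w -= lr * ternarize(g)  # multiply by {-1, 0, +1} is add/subtract/skip
```

For the stochastic-computing side, the sketch below shows only the conventional sign-magnitude SC encoding that the proposed representation builds on; the paper's dynamic sign-magnitude sequence itself is not reproduced here, and all names are hypothetical:

```python
import numpy as np

def to_sign_magnitude_sc(x, length=16, rng=None):
    """Encode x in [-1, 1] as (sign, bitstream): each bit is 1 with
    probability |x|, so the mean of the stream approximates |x|.
    Conventional SC encoding, shown for background (assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    sign = 1 if x >= 0 else -1
    bits = (rng.random(length) < abs(x)).astype(np.uint8)
    return sign, bits

def sc_multiply(a, b):
    """Multiply two SC values: AND the (independent) magnitude
    streams and multiply the signs."""
    sa, ba = a
    sb, bb = b
    return sa * sb, ba & bb

x = to_sign_magnitude_sc(0.5, length=16)
y = to_sign_magnitude_sc(-0.75, length=16)
s, bits = sc_multiply(x, y)
print(s * bits.mean())  # noisy estimate of 0.5 * -0.75 = -0.375
```

With a sequence length of 16, as the abstract reports, each such product costs only a 16-bit AND plus a sign XOR, which is the source of the hardware savings the paper targets.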