Quantized Back-Propagation: Training Binarized Neural Networks with Quantized Gradients

12 Feb 2018 (modified: 05 May 2023), ICLR 2018 Workshop Submission, Readers: Everyone
Abstract: Binarized Neural Networks (BNNs) have been shown to be effective in improving network efficiency during the inference phase, after the network has been trained. However, BNNs binarize only the model parameters and activations during propagation. We show there is no inherent difficulty in training BNNs using "Quantized Back-Propagation" (QBP), in which we also quantize the error gradients and, in the extreme case, ternarize them. To avoid significant degradation in test accuracy, we apply stochastic ternarization and increase the number of filter maps in each convolution layer. Using QBP has the potential to significantly improve execution efficiency (\emph{e.g.}, reduce the dynamic memory footprint and computational energy) and speed up the training process, even after such an increase in network size.
Keywords: Neural Network Acceleration, Neural Network Compression
TL;DR: By quantizing only the error gradients propagated in the backward pass, we can accelerate DNN training while maintaining high accuracy.
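
As a rough illustration of the kind of gradient quantization the abstract describes, below is a minimal NumPy sketch of stochastic ternarization applied to an error-gradient tensor. The function name stochastic_ternarize and the max-magnitude scaling are assumptions for illustration only; the paper's exact QBP scheme may differ.

import numpy as np

def stochastic_ternarize(grad, rng=None):
    # Hypothetical sketch: stochastically map each gradient entry to
    # {-scale, 0, +scale}, unbiased in expectation (E[output] == grad).
    rng = np.random.default_rng() if rng is None else rng
    scale = np.max(np.abs(grad))
    if scale == 0.0:
        return np.zeros_like(grad)
    keep_prob = np.abs(grad) / scale            # probability of a nonzero output, in [0, 1]
    mask = rng.random(grad.shape) < keep_prob   # keep an entry with probability |g| / scale
    return np.sign(grad) * mask * scale

# Usage: ternarize the error gradient of a convolution layer's output.
g = np.random.randn(32, 64, 14, 14).astype(np.float32)
g_t = stochastic_ternarize(g)
print("fraction of nonzero gradient entries:", np.mean(g_t != 0))

Because the keep probability is proportional to the gradient magnitude, the ternarized tensor equals the original gradient in expectation, which is what allows training to proceed despite the coarse quantization.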