Quantization Error as a Metric for Dynamic Precision Scaling in Neural Net Training

12 Feb 2018 (modified: 05 May 2023) · ICLR 2018 Workshop Submission
Abstract: Recent work has explored reduced numerical precision for parameters, activations, and gradients during neural network training as a way to reduce its computational cost. We present a novel dynamic precision scaling (DPS) scheme. Using stochastic fixed-point rounding, a quantization-error-based scaling scheme, and dynamic bit-widths during training, we achieve 98.8% test accuracy on the MNIST dataset with an average bit-width of just 16 bits for weights and 14 bits for activations, compared to the standard 32-bit floating-point values used in deep learning frameworks.
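The abstract names its two ingredients, stochastic fixed-point rounding and a quantization-error signal, without giving formulas. The sketch below is a minimal NumPy illustration of what such a quantizer and error metric could look like; the function names, the `total_bits`/`frac_bits` split, and the use of a relative L2 error are illustrative assumptions, not definitions taken from the paper.

```python
import numpy as np

def stochastic_fixed_point(x, total_bits=16, frac_bits=8, rng=None):
    """Quantize a float array to a signed fixed-point grid with stochastic rounding.

    Assumed format (not from the paper): step size 2**-frac_bits, values clipped
    to the range representable by `total_bits` signed bits.
    """
    rng = rng or np.random.default_rng()
    scale = 2.0 ** frac_bits
    scaled = x * scale
    floor = np.floor(scaled)
    # Round up with probability equal to the fractional remainder,
    # so the rounding is unbiased in expectation.
    quantized = floor + (rng.random(x.shape) < (scaled - floor))
    max_val = 2.0 ** (total_bits - 1) - 1
    quantized = np.clip(quantized, -max_val - 1, max_val)
    return quantized / scale

def quantization_error(x, x_q):
    """Relative L2 quantization error, one plausible choice of scaling signal."""
    return np.linalg.norm(x - x_q) / (np.linalg.norm(x) + 1e-12)
```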
Keywords: dynamic precision scaling, acceleration, training, reduced precision
TL;DR: We propose a dynamic precision scaling algorithm that uses information about the quantization error to reduce the bit-width of parameters, gradients, and activations during training without hurting accuracy.
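The TL;DR does not state how the measured error drives the bit-width. One hedged reading is a simple threshold controller that adds a bit of precision when the error exceeds a tolerance and removes one when it is comfortably below; the `target`, `min_bits`, and `max_bits` parameters below are hypothetical, not values from the paper.

```python
def adjust_bitwidth(err, bits, target=1e-3, min_bits=8, max_bits=32):
    """Grow or shrink the fixed-point bit-width from the quantization error.

    Illustrative rule only; the paper's actual update is not given in the abstract.
    """
    if err > target:
        bits = min(bits + 1, max_bits)   # error too large: add precision
    elif err < 0.5 * target:
        bits = max(bits - 1, min_bits)   # error well under budget: try fewer bits
    return bits
```

In such a loop, each training step would quantize the tensors at the current bit-width, measure the resulting error, and feed it back into the next step's precision choice, which is how an average of 16 weight bits and 14 activation bits could emerge over training.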