Mixed Precision Training With 8-bit Floating Point

25 Sept 2019 (modified: 05 May 2023) ICLR 2020 Conference Blind Submission
TL;DR: We demonstrate state-of-the-art training results using 8-bit floating point representation across ResNet, GNMT, and Transformer.
Abstract: Reduced precision computation is one of the key areas addressing the widening 'compute gap', driven by an exponential growth in deep learning applications. In recent years, deep neural network training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges, because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. We demonstrate state-of-the-art accuracy across multiple data sets (ImageNet-1K, WMT16) and a broader set of workloads (ResNet-18/34/50, GNMT, and Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point and improve error propagation. We also examine the impact of quantization noise on generalization, and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to the full precision baseline.
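
To illustrate the stochastic rounding idea mentioned in the abstract, below is a minimal NumPy sketch that quantizes float32 values onto an e5m2-style FP8 grid (1 sign, 5 exponent, 2 mantissa bits) with unbiased stochastic rounding. The format parameters, exponent range, and function name are assumptions for illustration, not the paper's exact recipe.

    # Minimal sketch: stochastic rounding onto a hypothetical e5m2-style FP8 grid.
    # Format parameters below are illustrative assumptions, not the paper's scheme.
    import numpy as np

    def stochastic_round_fp8(x, mant_bits=2, exp_min=-14, exp_max=15, rng=None):
        """Quantize a float32 array to a simulated FP8 grid with stochastic rounding."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x, dtype=np.float32)
        sign, mag = np.sign(x), np.abs(x)

        # Per-value exponent, clamped to the representable (incl. subnormal) range.
        exp = np.floor(np.log2(np.maximum(mag, np.finfo(np.float32).tiny)))
        exp = np.clip(exp, exp_min, exp_max)

        # Spacing between adjacent representable numbers in this binade.
        step = np.exp2(exp - mant_bits)

        # Round down, then bump up with probability equal to the remainder,
        # so the quantized value is unbiased in expectation.
        q = step * np.floor(mag / step + rng.uniform(size=mag.shape))

        # Saturate at the largest finite FP8 magnitude.
        q = np.minimum(q, np.exp2(exp_max) * (2.0 - np.exp2(-float(mant_bits))))
        return (sign * q).astype(np.float32)

    # Small gradients keep roughly the correct mean despite the coarse 2-bit mantissa.
    g = 1e-3 * np.random.default_rng(0).standard_normal(10000).astype(np.float32)
    print(g.mean(), stochastic_round_fp8(g).mean())

Because the rounding is unbiased in expectation, accumulated gradient updates retain their mean even when individual values fall near or below the coarse FP8 spacing, which is the property the abstract appeals to for addressing gradient noise.
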
Keywords: 8-bit training, 8-bit floating point, low precision training, deep learning