Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization

Published: 28 Jan 2022 · Last Modified: 13 Feb 2023 · ICLR 2022 Poster
Keywords: low-precision training, quantized training, logarithmic weight, data format optimization, hysteresis quantization
Abstract: As the complexity and size of deep neural networks continue to increase, low-precision training has been extensively studied in the last few years to reduce hardware overhead. Training performance is largely affected by the numeric formats representing different values in low-precision training, but finding an optimal format typically requires numerous training runs, which is very time-consuming. In this paper, we propose a method to efficiently find an optimal format for activations and errors without actual training. We employ this method to determine an 8-bit format suitable for training various models. In addition, we propose hysteresis quantization to suppress undesired fluctuation in quantized weights during training. This scheme enables deeply quantized training using 4-bit weights, exhibiting only 0.2% degradation for ResNet-18 trained on ImageNet.
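To make the hysteresis idea concrete, below is a minimal sketch of what hysteresis quantization can look like for a uniformly quantized weight. It is not the authors' reference implementation; the helper name `hysteresis_quantize`, the uniform step `delta`, and the use of NumPy are all assumptions made for illustration. The key behavior matches the abstract's description: the quantized value changes only when the latent weight moves consistently past a level boundary, so small oscillations around a boundary no longer flip the quantized weight back and forth.

```python
import numpy as np

def hysteresis_quantize(w, q_prev, delta):
    """Illustrative sketch of hysteresis quantization (hypothetical helper,
    not the paper's reference code). Assumes uniform quantization with
    step size `delta`.

    Round down while the latent weight `w` is rising above the previous
    quantized value `q_prev`, and round up while it is falling below it.
    The quantized value therefore advances only after `w` has crossed a
    full quantization step, suppressing fluctuation between adjacent levels.
    """
    q_rising = np.floor(w / delta) * delta   # applied when w > q_prev
    q_falling = np.ceil(w / delta) * delta   # applied when w < q_prev
    return np.where(w > q_prev, q_rising,
                    np.where(w < q_prev, q_falling, q_prev))

# Example: with delta = 1.0 and q_prev = 0.0, a weight drifting up to 0.9
# stays quantized at 0.0; only once it exceeds 1.0 does the output move to
# 1.0, and it then stays at 1.0 until the weight falls below 0.0 again.
```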
One-sentence Summary: We propose a systematic data format optimization method and hysteresis quantization scheme to enable efficient low-precision training.
Supplementary Material: zip