Keywords: mixed-precision-training, quantization, deep-learning, transformers
TL;DR: The paper introduces a metric-driven approach to selecting a low-precision data type for ML training.
Abstract: As deep learning methodologies have developed, it has become generally accepted that increasing neural network size improves model quality. However, this comes at the expense of memory and compute requirements, which must grow correspondingly. Various efficiency techniques have been proposed to rein in hardware costs, one being the use of low-precision numerics. Recent accelerators have introduced several different 8-bit data types to accommodate the numerical requirements of DNNs. In this paper, we identify a metric-driven methodology to aid in the choice of numerics. We demonstrate how such a methodology can help scale training of a language representation model, and the technique generalizes to other model architectures.
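The abstract does not specify the paper's metric, so the following is an illustration only: a minimal sketch of what a metric-driven comparison between two common 8-bit floating-point formats (E4M3 and E5M2) could look like, using mean squared quantization error as a hypothetical selection metric. The quantizer is a simplified model that ignores subnormals, NaN/Inf encodings, and format-specific reserved values; the sample tensor is made up.

```python
import math

def quantize(x: float, exp_bits: int, man_bits: int) -> float:
    """Round x to a simplified low-precision float format.

    Hypothetical model: no subnormals or reserved encodings --
    just enough to compare rounding behavior across formats.
    """
    if x == 0.0:
        return 0.0
    bias = 2 ** (exp_bits - 1) - 1
    e = math.floor(math.log2(abs(x)))
    e = max(e, 1 - bias)                       # clamp below the smallest normal exponent
    step = 2.0 ** (e - man_bits)               # spacing between representable values here
    q = round(x / step) * step
    max_val = (2 - 2.0 ** -man_bits) * 2.0 ** bias
    return max(-max_val, min(max_val, q))      # saturate at the largest finite value

def mse(values, exp_bits, man_bits):
    """Mean squared quantization error: the example selection metric."""
    return sum((v - quantize(v, exp_bits, man_bits)) ** 2 for v in values) / len(values)

# Toy values standing in for tensor statistics observed during training.
sample = [0.003, -0.07, 0.5, 1.25, -3.7, 12.0]
err_e4m3 = mse(sample, exp_bits=4, man_bits=3)   # E4M3: more mantissa precision
err_e5m2 = mse(sample, exp_bits=5, man_bits=2)   # E5M2: more exponent range
choice = "E4M3" if err_e4m3 < err_e5m2 else "E5M2"
```

In practice, such a metric would be measured on real activation, weight, and gradient distributions from the target model, and the format minimizing the metric could be chosen per tensor category.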
Workshop Track: ASSYST
Presentation: In-Person
Presenter Full Name: Mitchelle Rasquinha
Presenter Email: mrasquinha@google.com
Presenter Bio: Mitchelle is a Senior Software Engineer working on system-level optimization of deep learning models and frameworks.