SLaNC: Static LayerNorm Calibration

Published: 17 Oct 2024, Last Modified: 17 Oct 2024 | MLNCP Poster | CC BY 4.0
Keywords: Large language models, LayerNorm, Quantization, Floating point, Static LayerNorm calibration
TL;DR: This paper introduces Static LayerNorm Calibration (SLaNC), a method that computes scaling factors for LayerNorm inputs in Transformers using only the static weights of the preceding linear layers. SLaNC helps prevent overflow and underflow in LayerNorm computations.
Abstract: The ever-increasing sizes of Large Language Models (LLMs), now beyond hundreds of billions of parameters, have placed enormous pressure on the manufacturers of dedicated hardware accelerators and made accelerator design one of the most rapidly expanding fields of the AI industry. Various approaches have been explored to enable efficient and accurate processing of LLMs on the available accelerators given their computational and storage limitations. Among these, quantization techniques have become the main focus of the community as a means of reducing compute, communication and storage requirements. Quantization to lower-precision formats naturally poses a number of challenges caused by the limited range of the representable values. When processing the popular Transformer models on such hardware, one of the main issues is the calculation of the LayerNorm, simply because accumulating the variance requires a much wider dynamic range than the hardware provides. In this article, we address this matter and propose a computationally efficient scaling technique that can be easily applied to Transformer models during inference. Our method scales the LayerNorm inputs based on the static weights of the immediately preceding linear layers. The scaling factors are computed offline, based solely on the linear layer weights, so no latency or computational overhead is added during inference. Most importantly, our technique ensures that no numerical issues such as overflow or underflow can occur during the computation. This approach offers smooth, accurate and resource-efficient inference across a wide range of hardware architectures. The article provides theoretical justification as well as supporting numerical simulations.
Submission Number: 16
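
The key observation behind the abstract is that LayerNorm is insensitive to a uniform scaling of its input, so a factor computed offline from the preceding linear layer's weights can shrink the dynamic range of the variance accumulation without changing the output. The sketch below illustrates that idea in PyTorch; the specific scaling rule (a Frobenius-norm heuristic) and the names static_scale_from_linear and CalibratedLayerNorm are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
# Minimal sketch of pre-scaling LayerNorm inputs by a statically computed factor.
# Assumption: the scale is derived from the Frobenius norm of the preceding
# linear layer's weights; the paper derives its own factor. LayerNorm output is
# (up to eps) invariant to uniform input scaling, so the factor cancels while
# keeping the mean/variance accumulation within a narrow floating-point range.

import torch
import torch.nn as nn


def static_scale_from_linear(linear: nn.Linear) -> float:
    """Offline scaling factor from the preceding linear layer's static weights
    (illustrative choice: Frobenius norm normalized by the output dimension)."""
    w = linear.weight.detach()
    return (torch.linalg.matrix_norm(w) / w.shape[0] ** 0.5).item()


class CalibratedLayerNorm(nn.Module):
    """LayerNorm whose input is divided by a factor fixed at calibration time,
    so the variance accumulation stays representable in low-precision formats."""

    def __init__(self, normalized_shape: int, scale: float, eps: float = 1e-5):
        super().__init__()
        self.ln = nn.LayerNorm(normalized_shape, eps=eps)
        self.inv_scale = 1.0 / scale  # precomputed offline, no runtime cost

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scaling the input leaves the normalized output unchanged (up to eps).
        return self.ln(x * self.inv_scale)


if __name__ == "__main__":
    hidden = 1024
    proj = nn.Linear(hidden, hidden)              # preceding linear layer
    scale = static_scale_from_linear(proj)        # computed once, offline
    ln = CalibratedLayerNorm(hidden, scale)

    x = proj(torch.randn(2, 8, hidden) * 1e3)     # large pre-LayerNorm activations
    y_ref = nn.LayerNorm(hidden)(x)               # unscaled reference
    y = ln(x)
    print(torch.allclose(y, y_ref, atol=1e-4))    # True: outputs match
```

In an actual deployment the factor would presumably be folded into the preceding weights or into the LayerNorm kernel itself, consistent with the abstract's claim that no latency or computational overhead is added at inference time.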