Zero-Shot Dynamic Quantization for Transformer Inference

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=SqRs1Dsp-e-
Paper Type: Short paper (up to four pages of content + unlimited references and appendices)
Abstract: We introduce a novel run-time method for significantly reducing the accuracy loss associated with quantizing BERT-like models to 8-bit integers. Existing quantization methods either modify the training procedure or require an additional calibration step, performed on a selected held-out dataset, to adjust parameters. Our method permits taking advantage of quantization without the need for these adjustments. We present results on several NLP tasks demonstrating the usefulness of this technique.
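As context for the setting the abstract describes, the sketch below shows standard post-training dynamic int8 quantization of a BERT-like model using PyTorch and Hugging Face Transformers. This is only a minimal illustration of the conventional dynamic-quantization baseline, not the paper's zero-shot method; the model name and library calls are assumptions for demonstration.

```python
# Minimal sketch of conventional dynamic int8 quantization (baseline setting,
# not the paper's method). Assumes PyTorch and the transformers package.
import torch
from transformers import AutoModelForSequenceClassification

# Load a BERT-like model in full precision (model name is illustrative).
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

# Quantize the weights of all Linear layers to int8. Activations are
# quantized dynamically at run time, so no calibration dataset is required.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

print(quantized_model)
```

In this baseline, weight scales are fixed after conversion while activation scales are recomputed per batch at inference time; the paper's contribution targets the accuracy loss that such 8-bit conversion typically incurs, without retraining or calibration.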