ALPS: Adaptive Quantization of Deep Neural Networks With GeneraLized PositS

Published: 01 Jan 2021, Last Modified: 28 Apr 2023, CVPR Workshops 2021
Abstract: In this paper, a new adaptive quantization algorithm for the generalized posit format is presented to optimally represent the dynamic range and distribution of deep neural network parameters. Adaptation is achieved by minimizing the intra-layer posit quantization error with a compander. The efficacy of the proposed quantization algorithm is studied within a new low-precision framework, ALPS, on ResNet-50 and EfficientNet models for classification tasks. Results show that low-precision DNNs using generalized posits outperform other well-known numerical formats, including standard posits, in both accuracy and energy dissipation.
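To make the compander idea in the abstract concrete, here is a minimal illustrative sketch. It is not the paper's algorithm: the power-law compander, the candidate grid of exponents `beta`, and the uniform quantizer standing in for a generalized posit grid are all simplifying assumptions. It only shows the general pattern of searching over a compander parameter to minimize intra-layer quantization error.

```python
import numpy as np

def quantize_uniform(x, num_bits):
    # Mid-rise uniform quantizer on [-1, 1]; an illustrative stand-in
    # for a (generalized) posit grid, which is non-uniform in practice.
    levels = 2 ** (num_bits - 1)
    return np.clip(np.round(x * levels), -levels, levels - 1) / levels

def compander_quantize(w, num_bits, betas=np.linspace(0.25, 1.0, 16)):
    # Hypothetical compander: compress weights as sign(w) * |w|**beta,
    # quantize, then expand with the inverse power. The best beta is
    # chosen by minimizing the intra-layer mean squared error.
    scale = np.max(np.abs(w))
    if scale == 0.0:
        return w.copy(), 1.0
    x = w / scale
    best = None
    for beta in betas:
        y = np.sign(x) * np.abs(x) ** beta            # compress
        q = quantize_uniform(y, num_bits)             # quantize
        w_hat = scale * np.sign(q) * np.abs(q) ** (1.0 / beta)  # expand
        mse = np.mean((w - w_hat) ** 2)
        if best is None or mse < best[0]:
            best = (mse, beta, w_hat)
    return best[2], best[1]
```

Because `beta = 1.0` (no companding) is among the candidates, the selected quantizer never does worse in MSE than plain uniform quantization on the same grid; heavier-tailed weight distributions tend to select `beta < 1`, which devotes more quantization levels to small-magnitude weights.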