PositNN: Tapered Precision Deep Learning Inference for the Edge

20 Oct 2018 (modified: 05 May 2023) · NIPS 2018 Workshop CDNNRIA Blind Submission · Readers: Everyone
Abstract: The performance of neural networks, especially the currently popular form of deep neural networks, is often limited by the underlying hardware. Computations in deep neural networks are expensive, have a large memory footprint, and are power hungry. Conventional reduced-precision numerical formats, such as fixed-point and floating-point, cannot accurately represent deep neural network parameters, which have a nonlinear distribution and a small dynamic range. The recently proposed posit numerical format, with its tapered precision, represents small values more accurately than these other formats. In this work, we propose an ultra-low-precision deep neural network, PositNN, that uses posits during inference. The efficacy of PositNN is demonstrated on a deep neural network architecture with three datasets (MNIST, Fashion MNIST, and CIFAR-10), where an 8-bit PositNN outperforms other 5- to 8-bit low-precision neural networks and a 32-bit floating-point baseline network.
Keywords: Deep neural network, Low precision arithmetic, Posit number system
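To make the tapered-precision idea concrete, below is a minimal Python sketch (not taken from the paper) that decodes an n-bit posit with es exponent bits into a float. The function name decode_posit and the default posit(8, 1) configuration are illustrative assumptions; it only shows why posits devote the most fraction bits, and hence the most precision, to values near 1.0.

```python
def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits into a float.

    Illustrative sketch of the standard posit layout:
    sign | regime (run-length coded) | exponent (es bits) | fraction.
    """
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):             # only the sign bit set: Not-a-Real (NaR)
        return float("nan")

    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:                         # negative posits decode via two's complement
        bits = (-bits) & ((1 << n) - 1)

    body = bits & ((1 << (n - 1)) - 1)   # drop the (now zero) sign bit
    regime_bit = (body >> (n - 2)) & 1
    run = 0
    for i in range(n - 2, -1, -1):       # count the run of identical regime bits
        if (body >> i) & 1 == regime_bit:
            run += 1
        else:
            break
    k = run - 1 if regime_bit else -run  # regime scale factor

    rem_len = max(n - 2 - run, 0)        # bits left after the regime and its terminator
    rem = body & ((1 << rem_len) - 1)
    exp_len = min(es, rem_len)
    exponent = (rem >> (rem_len - exp_len)) << (es - exp_len) if exp_len else 0
    frac_len = rem_len - exp_len
    fraction = rem & ((1 << frac_len) - 1)
    frac_value = fraction / (1 << frac_len) if frac_len else 0.0

    useed = 2 ** (2 ** es)               # useed = 2^(2^es); tapering comes from useed^k
    return sign * (useed ** k) * (2 ** exponent) * (1.0 + frac_value)


# Short regimes leave many fraction bits (fine precision near 1.0);
# longer regimes trade precision for dynamic range.
print(decode_posit(0b01000000))  # 1.0
print(decode_posit(0b01001000))  # 1.5
print(decode_posit(0b01010000))  # 2.0
```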