TENT: Efficient Quantization of Neural Networks on the tiny Edge with Tapered FixEd PoiNT

Published: 07 Feb 2021 · Last Modified: 05 May 2023 · tinyML 2021 Poster
Keywords: deep neural networks, low-precision arithmetic, tapered fixed-point
Abstract: We propose TENT, a low-precision framework that leverages the benefits of a tapered fixed-point numerical format in TinyML models. We introduce a tapered fixed-point quantization algorithm that matches the numerical format's dynamic range and value distribution to the parameter distribution of the deep neural network at each layer, and we propose an accelerator architecture for tapered fixed-point within the TENT framework. Results show that accuracy on classification tasks improves by up to $\approx 31\%$, with an energy overhead of $\approx 17$-$30\%$, compared with fixed-point for ConvNet and ResNet-18 models.
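To make the idea concrete, below is a minimal, illustrative sketch of per-layer tapered quantization. It is a hypothetical simplification, not the TENT implementation or its accelerator datapath: the function name `tapered_fixed_point_quantize`, the max-based per-layer scale, and the one-regime-bit-per-binade taper model are all assumptions made for illustration. Each layer is rescaled so the format's range covers its parameters, and each value is then rounded on a grid whose fraction-bit budget tapers with the magnitude's binade, mimicking a posit-style regime/fraction trade-off.

```python
import numpy as np

def tapered_fixed_point_quantize(w, n_bits=8, scale=None):
    """Illustrative per-layer tapered fixed-point quantization
    (a simplified sketch, not the TENT kernel).

    The layer is rescaled so the format's range covers its parameter
    distribution; each value is then rounded on a grid whose relative
    precision is highest near the top binade and tapers off toward
    small magnitudes, as in posit-like tapered formats.
    """
    if scale is None:
        scale = np.max(np.abs(w))           # crude per-layer range match
    if scale == 0:
        return w
    x = w / scale                           # normalize into [-1, 1]
    eps = 2.0 ** (1 - n_bits)               # smallest representable binade
    mag = np.clip(np.abs(x), eps, 1.0)
    binade = np.floor(np.log2(mag))         # in -(n_bits - 1) .. 0
    # Taper: each binade away from 1.0 costs one "regime" bit,
    # leaving fewer fraction bits for values far from the top binade.
    frac_bits = np.maximum(n_bits - 2 + binade, 0)
    step = 2.0 ** (binade - frac_bits)      # element-wise grid step
    q = np.round(x / step) * step
    return np.clip(q, -1.0, 1.0) * scale
```

In a real system, the per-layer `scale` would be searched so that the format's densest region aligns with the bulk of the layer's parameter distribution, which is the matching step the abstract describes; the max-based choice above is only a placeholder.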