Mixed-Precision Inference Quantization: Radically Towards Faster Inference Speed, Lower Storage Requirement, and Lower Loss

Published: 01 Feb 2023, Last Modified: 13 Feb 2023
Submitted to ICLR 2023
Readers: Everyone
Keywords: Mixed-Precision Quantization, Inference, Neural networks, Noise Robustness, Residual Network
TL;DR: An appropriate quantization method can reduce a neural network's loss; we present such a method in this paper.
Abstract: Model quantization is important for compressing models and improving inference speed. However, it is commonly believed that the loss of a quantized model is higher than that of its full-precision counterpart. This study provides a methodology for obtaining a mixed-precision quantized model with a lower loss than the full-precision model. In addition, our analysis demonstrates that, throughout the inference process, the loss function is affected mostly by noise in the layer inputs. In particular, we show that neural networks with many identity mappings (e.g., residual networks) are robust to quantization noise, which also makes it difficult to improve their performance through quantization.
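To illustrate the general idea described in the abstract, the following is a minimal Python/NumPy sketch of mixed-precision quantization: each layer's weights are quantized to its own bit-width, and the resulting perturbation of the network output is compared against a full-precision forward pass. The toy two-layer network, the fake_quantize helper, and the bit-width assignments are hypothetical assumptions for illustration only, not the paper's actual method.

# Minimal, illustrative sketch (not the authors' method): symmetric uniform
# "fake" quantization applied per layer with different bit-widths, showing
# how mixed-precision quantization injects noise into the layer inputs.
# The toy network and all names here are hypothetical.
import numpy as np

def fake_quantize(x, bits):
    """Symmetric uniform quantize-dequantize of x to the given bit-width."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return x
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 16))          # toy layer input
W1 = rng.standard_normal((16, 32)) * 0.1  # toy first-layer weights
W2 = rng.standard_normal((32, 8)) * 0.1   # toy second-layer weights

def forward(x, bits_per_layer):
    """Run the toy network, quantizing each layer's weights to its own bit-width."""
    h = np.maximum(x @ fake_quantize(W1, bits_per_layer[0]), 0.0)  # ReLU layer
    return h @ fake_quantize(W2, bits_per_layer[1])

y_fp = forward(x, (32, 32))               # effectively full precision
for bits in [(8, 8), (8, 4), (4, 8), (4, 4)]:
    y_q = forward(x, bits)
    noise = np.linalg.norm(y_q - y_fp) / np.linalg.norm(y_fp)
    print(f"bits={bits}: relative output noise {noise:.4f}")

Under this toy setup, comparing the output noise across bit-width assignments conveys the intuition behind mixed-precision assignment: layers whose input noise most affects the loss would be kept at higher precision.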
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: General Machine Learning (ie none of the above)