Normalization Helps Training of Quantized LSTM

Lu Hou, Jinhua Zhu, James Tin-Yau Kwok, Fei Gao, Tao Qin, Tie-Yan Liu

06 Sept 2019 (modified: 05 May 2023), NeurIPS 2019
Abstract: The long short-term memory (LSTM), though powerful, is expensive in both memory and computation. To alleviate this problem, one approach is to compress its weights by quantization. However, existing quantization methods usually perform poorly when applied to LSTMs. In this paper, we first show theoretically that training a quantized LSTM is difficult because quantization makes the exploding gradient problem more severe, particularly when the LSTM weight matrices are large. We then show that the popularly used weight/layer/batch normalization schemes can help stabilize the gradient magnitude when training quantized LSTMs. Empirical results show that the normalized quantized LSTMs achieve significantly better results than their unnormalized counterparts. Their performance is also comparable with the full-precision LSTM, while being much smaller in size. The code can be found at https://github.com/houlu369/Normalized-Quantized-LSTM.
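
To make the idea in the abstract concrete, the following is a minimal sketch of one LSTM step that combines weight quantization with layer normalization on the gate pre-activations. It is not the authors' implementation (see the linked repository for that); the class and function names, the sign-based binarization, and the choice of layer normalization over weight/batch normalization are all illustrative assumptions.

import torch
import torch.nn.functional as F

def binarize(w):
    # Sign binarization with a per-matrix scaling factor (illustrative choice;
    # the paper covers quantization schemes more generally).
    alpha = w.abs().mean()
    return alpha * torch.sign(w)

class QuantLSTMCell(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        # Input-to-hidden and hidden-to-hidden weights for the four gates (i, f, g, o).
        self.w_ih = torch.nn.Parameter(torch.randn(4 * hidden_size, input_size) * 0.1)
        self.w_hh = torch.nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.bias = torch.nn.Parameter(torch.zeros(4 * hidden_size))

    def forward(self, x, state):
        h, c = state
        # Quantize the weights for the forward pass. A full implementation would
        # use a straight-through estimator so gradients flow to the real-valued weights.
        w_ih_q, w_hh_q = binarize(self.w_ih), binarize(self.w_hh)
        # Layer-normalize each matrix product separately, which keeps the gate
        # pre-activations at a stable scale even if quantization inflates the weight norms.
        pre = (F.layer_norm(x @ w_ih_q.t(), (4 * self.hidden_size,))
               + F.layer_norm(h @ w_hh_q.t(), (4 * self.hidden_size,))
               + self.bias)
        i, f, g, o = pre.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

Normalizing the pre-activations is one way to decouple the scale of the recurrent signal from the (quantization-distorted) weight magnitudes, which is the stabilizing effect the paper analyzes.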
Code Link: https://github.com/houlu369/Normalized-Quantized-LSTM
CMT Num: 4004