AntMan: Sparse Low-Rank Compression To Accelerate RNN Inference

27 Sept 2018 (modified: 13 Jul 2023) · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: Wide adoption of complex RNN-based models is hindered by their inference performance, cost, and memory requirements. To address this issue, we develop AntMan, which synergistically combines structured sparsity with low-rank decomposition to reduce the computation, size, and execution time of RNNs while attaining the desired accuracy. AntMan extends knowledge-distillation-based training to learn the compressed models efficiently. Our evaluation shows that AntMan offers up to 100x computation reduction with less than a 1-point accuracy drop for language and machine reading comprehension models. Our evaluation also shows that, for a given accuracy target, AntMan produces models 5x smaller than the state of the art. Lastly, we show that AntMan offers super-linear speed gains, exceeding even the theoretical speedup, demonstrating its practical value on commodity hardware.
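At a high level, the abstract describes replacing each dense recurrent matrix multiply with a structured-sparse projection plus a low-rank correction, with the compressed model trained to mimic the original via knowledge distillation. Below is a minimal PyTorch sketch of that general idea; the module name `SparseLowRankLinear`, the block-diagonal sparsity pattern, the `rank`/`num_blocks` parameters, and the simple MSE-based distillation loss are illustrative assumptions, not the paper's exact AntMan modules or training objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLowRankLinear(nn.Module):
    """Illustrative stand-in for a dense d x d layer: a block-diagonal
    (structured-sparse) projection combined with a low-rank correction.
    For d=1024, num_blocks=8, rank=16 this uses roughly 164K parameters
    versus ~1.05M for a dense layer (~6x reduction)."""
    def __init__(self, d, rank, num_blocks):
        super().__init__()
        assert d % num_blocks == 0
        self.num_blocks = num_blocks
        block = d // num_blocks
        # One small dense weight per block: num_blocks * block^2 params
        # instead of d^2 for the full dense matrix.
        self.blocks = nn.Parameter(torch.randn(num_blocks, block, block) * 0.01)
        # Low-rank factor pair: 2 * d * rank parameters.
        self.u = nn.Linear(d, rank, bias=False)
        self.v = nn.Linear(rank, d, bias=False)

    def forward(self, x):
        # x: (batch, d) -> split into blocks and apply per-block matmuls.
        b = x.view(x.size(0), self.num_blocks, -1)            # (B, nb, blk)
        sparse_out = torch.einsum('bnk,nkj->bnj', b, self.blocks)
        sparse_out = sparse_out.reshape(x.size(0), -1)        # (B, d)
        # Sum of the structured-sparse path and the low-rank path.
        return sparse_out + self.v(self.u(x))

def distillation_loss(student_out, teacher_out, targets, hard_loss_fn, alpha=0.5):
    """Blend a hard-label loss with an MSE term that matches the frozen
    teacher's outputs (teacher-student training); the exact loss form and
    weighting in the paper may differ."""
    soft = F.mse_loss(student_out, teacher_out)
    hard = hard_loss_fn(student_out, targets)
    return alpha * soft + (1 - alpha) * hard

# Usage: drop-in replacement for a 1024x1024 dense projection.
layer = SparseLowRankLinear(d=1024, rank=16, num_blocks=8)
y = layer(torch.randn(4, 1024))  # (4, 1024)
```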
Keywords: model compression, RNN, performance optimization, language model, machine reading comprehension, knowledge distillation, teacher-student
TL;DR: Reducing computation and memory complexity of RNN models by up to 100x using sparse low-rank compression modules, trained via knowledge distillation.