Knowledge Distillation Using Output Errors for Self-attention End-to-end Models

Ho-Gyeong Kim, Hwidong Na, Hoshik Lee, Jihyun Lee, Tae Gyoon Kang, Min-Joong Lee, Young Sang Choi

Published: 2019, ICASSP 2019. License: CC BY-SA 4.0
Abstract: Most automatic speech recognition (ASR) neural network models are not suitable for mobile devices due to their large model sizes. The model size must therefore be reduced to fit within limited hardware resources. In this study, we investigate sequence-level knowledge distillation techniques for compressing self-attention ASR models. To overcome the performance degradation of compressed models, our proposed method adds an exponential weight to the sequence-level knowledge distillation loss function, which reflects the word error rate of the teacher model's output with respect to the ground-truth word sequences. Evaluated on the LibriSpeech dataset, the proposed knowledge distillation method achieves significant improvements over the student baseline model.
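The abstract describes weighting the sequence-level distillation loss by an exponential function of the teacher hypothesis's word error rate. The exact weighting formula is not given here, so the sketch below is only an illustrative assumption: it computes WER via word-level edit distance and uses a hypothetical weight of the form exp(-WER), so that teacher hypotheses closer to the ground truth contribute more to the loss.

```python
import math


def edit_distance(ref, hyp):
    # Word-level Levenshtein distance (single-row dynamic programming).
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            # prev holds the previous row's dp[j-1] (substitution cell).
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (r != h))    # substitution/match
    return dp[-1]


def word_error_rate(ref_words, hyp_words):
    # WER = edit distance normalized by reference length.
    return edit_distance(ref_words, hyp_words) / max(len(ref_words), 1)


def distillation_weight(ref_words, teacher_hyp_words):
    # Hypothetical exponential weighting (not the paper's exact formula):
    # teacher hypotheses with lower WER receive a larger weight in the
    # sequence-level knowledge distillation loss.
    return math.exp(-word_error_rate(ref_words, teacher_hyp_words))
```

In a training loop, such a weight would multiply the per-sequence distillation loss term, down-weighting teacher hypotheses that diverge from the reference transcript.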