Multi-stage Distillation Framework for Cross-Lingual Semantic Similarity Matching

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=O1VX99FS0ER
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: Previous studies have shown that cross-lingual knowledge distillation can significantly improve the performance of pre-trained models on cross-lingual similarity matching tasks. However, the student model must remain large for this to work; otherwise, its performance drops sharply, making it impractical to deploy on memory-limited devices. To address this issue, we delve into cross-lingual knowledge distillation and propose a multi-stage distillation framework for constructing a small but high-performance cross-lingual model. In our framework, contrastive learning, bottleneck, and parameter recurrent strategies are carefully combined to keep performance from being compromised during compression. Experimental results demonstrate that our method compresses XLM-R and MiniLM by more than 50% in size while reducing performance by only about 1%.
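
As a rough illustration of the kind of objective the abstract describes (not the authors' actual implementation), the sketch below combines a sentence-embedding distillation loss with an in-batch contrastive loss in PyTorch. The teacher, student, and parallel-batch inputs are placeholders, and the bottleneck and parameter-recurrence components of the framework are omitted.

# Minimal sketch of one cross-lingual distillation step, assuming PyTorch and
# encoders that map a tokenized batch to sentence embeddings. All names here
# (teacher, student, src_batch, tgt_batch, alpha) are illustrative placeholders.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, src_batch, tgt_batch,
                      temperature=0.05, alpha=1.0):
    """src_batch / tgt_batch: pre-tokenized parallel sentences (source / target language)."""
    with torch.no_grad():
        t_src = teacher(src_batch)        # teacher embeddings of the source sentences

    s_src = student(src_batch)            # student embeddings, source language
    s_tgt = student(tgt_batch)            # student embeddings, target language

    # Distillation (regression) loss: the student mimics the teacher's
    # source-language embedding for both sides of each parallel pair.
    mse = F.mse_loss(s_src, t_src) + F.mse_loss(s_tgt, t_src)

    # Contrastive loss: parallel sentences are positives, other in-batch pairs negatives.
    sim = F.cosine_similarity(s_src.unsqueeze(1), s_tgt.unsqueeze(0), dim=-1) / temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    contrastive = F.cross_entropy(sim, labels)

    return mse + alpha * contrastive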
Presentation Mode: This paper will be presented virtually
Virtual Presentation Timezone: UTC+8
Copyright Consent Signature (type Name Or NA If Not Transferrable): Kunbo Ding
Copyright Consent Name And Address: Peking University, No.5 Yiheyuan Road, Haidian District, Beijing, P.R. China