ESEAD: An Enhanced Simple Ensemble and Distillation Framework for Natural Language Processing

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: Natural Language Processing, Knowledge Distillation
TL;DR: A simple yet effective logits-based distillation method for natural language processing.
Abstract: Large-scale pre-trained language models (PLMs) are today’s leading technology for a wide range of natural language processing tasks. However, the enormous size of these models may discourage their use in practice. To tackle this problem, some recent studies have used knowledge distillation (KD) to compress these large models into shallow ones. Despite the success of knowledge distillation, it remains unclear how students learn. In this paper, we extend knowledge distillation and propose an enhanced version of the logits-based distillation method, ESEAD, which uses the knowledge of multiple teachers to assist student learning. In extensive experiments across a total of 13 tasks on the GLUE and SuperGLUE benchmarks, ESEAD with different fine-tuning paradigms (e.g., delta tuning) obtained superior results over other KD methods and even outperformed the teacher model on some tasks. In addition, ESEAD remained the best-performing student model in few-shot settings (e.g., 100 samples).
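To make the "logits-based, multi-teacher" idea in the abstract concrete, the sketch below shows what a generic distillation objective of this kind can look like: the student matches a softened distribution derived from an ensemble of teacher logits while also fitting the gold labels. This is a minimal illustration, not ESEAD's exact formulation; the function name `distill_loss` and the hyperparameters `temperature` and `alpha` are assumptions made for the example, not taken from the paper.

```python
# Minimal sketch of a multi-teacher, logits-based distillation loss (PyTorch).
# All names and hyperparameters here are illustrative assumptions, not ESEAD's.
import torch
import torch.nn.functional as F


def distill_loss(student_logits, teacher_logits_list, labels,
                 temperature=2.0, alpha=0.5):
    """Blend cross-entropy on gold labels with KL divergence toward the
    averaged (ensembled) teacher distribution."""
    # Ensemble the teachers by averaging their logits.
    teacher_logits = torch.stack(teacher_logits_list, dim=0).mean(dim=0)

    # Soft targets from the teacher ensemble, softened by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # KL term scaled by T^2, as is conventional in logits-based KD.
    kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

    # Standard supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)

    return alpha * kd + (1.0 - alpha) * ce
```

In practice, such a loss would replace the plain cross-entropy term when fine-tuning the shallow student, with the teacher logits precomputed or produced by frozen teacher forward passes.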
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (e.g., speech processing, computer vision, NLP)