Transformer to CNN: Label-scarce distillation for efficient text classification

Published: 07 Nov 2018 · Last Modified: 05 May 2023 · NIPS 2018 Workshop CDNNRIA Blind Submission · Readers: Everyone
Abstract: Significant advances have been made in Natural Language Processing (NLP) modelling since the beginning of 2018. The new approaches achieve accurate results even when little labelled data is available, because these NLP models can benefit from training on both task-agnostic and task-specific unlabelled data. However, these advantages come with significant size and computational costs. This workshop paper outlines how our proposed convolutional student architecture, trained by a distillation process from a large-scale model, can achieve a 300x inference speedup and a 39x reduction in parameter count. In some cases, the student model's performance even surpasses its teacher's on the studied tasks.
TL;DR: We train a small, efficient CNN with the same performance as the OpenAI Transformer on text classification tasks
Keywords: NLP, text classification, model distillation, model compression, efficient architecture, OpenAI Transformer, transfer learning, cnn, low data, student, teacher
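The paper's training code is not included on this page. As a rough illustration of the teacher-to-student setup described in the abstract (a small CNN fit to a large Transformer teacher's predictions on a text classification task), a minimal sketch follows. The `StudentCNN` architecture, `distillation_loss`, temperature `T`, and mixing weight `alpha` are illustrative assumptions using a standard logit-distillation objective, not the authors' exact implementation.

```python
# Minimal sketch of logit distillation for text classification.
# Assumptions: the teacher's logits are precomputed for each batch;
# StudentCNN, T, and alpha are illustrative choices, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentCNN(nn.Module):
    """Small 1-D convolutional text classifier (hypothetical architecture)."""
    def __init__(self, vocab_size, embed_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=3, padding=1)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids).transpose(1, 2)   # (batch, embed_dim, seq_len)
        x = F.relu(self.conv(x)).max(dim=2).values  # global max pooling over time
        return self.fc(x)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target KL term (scaled by T^2) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In this kind of setup, the temperature softens the teacher's distribution so the student can learn from its relative class probabilities, and `alpha` trades off imitation of the teacher against the (possibly scarce) labelled data.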