Less is More: Task-aware Layer-wise Distillation for Language Model Compression

Published: 01 Jan 2023, Last Modified: 29 Sept 2023, ICML 2023
Abstract: Layer-wise distillation is a powerful tool to compress large models (i.e., teacher models) into small ones (i.e., student models). The student distills knowledge from the teacher by mimicking the hidden representations of the teacher …
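
For context, below is a minimal PyTorch sketch of a generic layer-wise distillation loss of the kind the abstract refers to: the student's hidden states are trained to match the teacher's at a chosen set of layers. This is an illustrative assumption, not the task-aware method proposed in the paper; the layer mapping, shapes, and function name are hypothetical, and matching hidden widths are assumed (a learned projection is typically added when they differ).

```python
import torch
import torch.nn.functional as F

def layerwise_distillation_loss(student_hiddens, teacher_hiddens, layer_map):
    """Average MSE between student hidden states and the teacher hidden
    states they are mapped to (generic layer-wise distillation)."""
    loss = torch.tensor(0.0)
    for s_idx, t_idx in layer_map:
        loss = loss + F.mse_loss(student_hiddens[s_idx], teacher_hiddens[t_idx])
    return loss / len(layer_map)

# Toy usage: a 6-layer student mimics every other layer of a 12-layer teacher.
batch, seq_len, hidden = 2, 8, 16
teacher_hiddens = [torch.randn(batch, seq_len, hidden) for _ in range(12)]
student_hiddens = [torch.randn(batch, seq_len, hidden) for _ in range(6)]
layer_map = [(s, 2 * s + 1) for s in range(6)]
print(layerwise_distillation_loss(student_hiddens, teacher_hiddens, layer_map))
```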