On Compressing U-net Using Knowledge Distillation

19 Oct 2018 (modified: 05 May 2023) | NIPS 2018 Workshop CDNNRIA Blind Submission
Abstract: We study the use of knowledge distillation to compress the U-net architecture. We show that, while standard distillation alone is not sufficient to reliably train a compressed U-net, combining it with additional regularization methods, such as batch normalization and class re-weighting, significantly improves the training process. This allows us to compress a U-net by over 1000x, i.e., to roughly 0.1% of its original number of parameters, with a negligible decrease in performance.
TL;DR: We present additional techniques that allow knowledge distillation to compress a U-net by over 1000x.
Keywords: u-net, compression, knowledge distillation, biomedical segmentation
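
To make the combination described in the abstract concrete, below is a minimal PyTorch-style sketch of a distillation objective that pairs a temperature-softened soft-target term with a class-re-weighted cross-entropy term for per-pixel segmentation. The function name, temperature, alpha, and class_weights are illustrative assumptions rather than the authors' exact formulation; batch normalization would be applied inside the compressed student network itself rather than in the loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      class_weights, temperature=2.0, alpha=0.5):
    """Hypothetical combined loss for distilling a segmentation U-net.

    student_logits, teacher_logits: (N, C, H, W) raw scores per pixel.
    targets: (N, H, W) integer ground-truth class labels.
    class_weights: (C,) tensor used to re-weight rare classes.
    """
    t = temperature

    # Soft-target term: KL divergence between temperature-softened
    # teacher and student predictions, scaled by t^2 as is customary
    # so gradients stay comparable across temperatures.
    soft_teacher = F.softmax(teacher_logits / t, dim=1)
    log_soft_student = F.log_softmax(student_logits / t, dim=1)
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (t * t)

    # Hard-target term: class-re-weighted cross-entropy against the
    # ground-truth segmentation masks.
    ce_term = F.cross_entropy(student_logits, targets,
                              weight=class_weights)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In this sketch, alpha balances matching the teacher's softened outputs against fitting the re-weighted ground truth; the class weights counteract the strong class imbalance typical of biomedical segmentation masks.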