Knowledge distillation using unlabeled mismatched images
Mandar Kulkarni, Kalpesh Patil, Shirish Karande
Feb 17, 2017 (modified: Mar 12, 2017) · ICLR 2017 workshop submission · readers: everyone
Abstract: Current approaches for Knowledge Distillation (KD) either directly use training data or sample from the training data distribution. In this paper, we demonstrate the effectiveness of 'mismatched' unlabeled stimulus for performing KD on image classification networks. For illustration, we consider scenarios where there is a complete absence of training data, or where mismatched stimulus has to be used to augment a small amount of training data. We demonstrate that stimulus complexity is a key factor for good distillation performance. Our examples include the use of various datasets to stimulate MNIST and CIFAR teachers.
TL;DR: Distilling knowledge from neural networks under the assumption that the training data is not available.
Keywords: Deep learning, Transfer Learning
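As a rough illustration of the setup the abstract describes, the sketch below distills a trained teacher classifier into a student using only unlabeled images drawn from a mismatched dataset (i.e., not the teacher's training data). This is an assumed minimal implementation, not the authors' code: the function names, hyperparameters, and temperature-softened KL loss are illustrative choices.

```python
# Minimal sketch (assumptions, not the paper's implementation): train a student
# network to match a frozen teacher's soft outputs on unlabeled, mismatched images.
import torch
import torch.nn.functional as F

def distill_on_mismatched(teacher, student, mismatched_loader,
                          epochs=10, T=4.0, lr=1e-3):
    teacher.eval()  # teacher is fixed; only its outputs are used as supervision
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in mismatched_loader:
            images = batch[0] if isinstance(batch, (list, tuple)) else batch
            with torch.no_grad():
                teacher_logits = teacher(images)      # soft targets from the teacher
            student_logits = student(images)
            # KL divergence between temperature-softened class distributions
            loss = F.kl_div(
                F.log_softmax(student_logits / T, dim=1),
                F.softmax(teacher_logits / T, dim=1),
                reduction="batchmean",
            ) * (T * T)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```

In this sketch the mismatched images serve purely as stimulus: any labels the loader provides are ignored, and the student learns only from the teacher's responses to those inputs.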