Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks
Zheng Xu, Yen-Chang Hsu, Jiawei Huang
Feb 12, 2018 (modified: Feb 12, 2018) · ICLR 2018 Workshop Submission · Readers: everyone
Abstract: There is increasing interest in accelerating neural networks for real-time applications. We study the student-teacher strategy, in which a small, fast student network is trained with auxiliary information learned from a large, accurate teacher network. We propose using conditional adversarial networks to learn the loss function that transfers knowledge from the teacher to the student. Experiments on three different image datasets show that the student network gains a performance boost with the proposed training strategy.
TL;DR: Adversarial training for transferring knowledge from a teacher network to a student network
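
For readers unfamiliar with the setup, below is a minimal PyTorch sketch of the idea described in the abstract: a discriminator, conditioned on the input, learns to tell teacher logits from student logits, while the student is trained to fool it alongside a standard classification loss. This is an illustrative assumption of one way to instantiate the approach, not the authors' implementation; the `Discriminator` class, `distillation_step` function, and all hyperparameters are hypothetical.

```python
# Hypothetical sketch of conditional adversarial knowledge distillation;
# not the authors' code. Assumes `student` and `teacher` are classifiers
# mapping images to logits, with matching optimizers `opt_s` and `opt_d`.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Scores whether logits came from the teacher or the student,
    conditioned on the (flattened) input image."""
    def __init__(self, num_classes, input_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes + input_dim, hidden),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, logits, x):
        return self.net(torch.cat([logits, x.flatten(1)], dim=1))

def distillation_step(student, teacher, disc, x, y, opt_s, opt_d):
    with torch.no_grad():
        t_logits = teacher(x)  # teacher outputs serve as fixed targets
    s_logits = student(x)

    # Discriminator update: teacher logits labeled real (1), student fake (0).
    d_real = disc(t_logits, x)
    d_fake = disc(s_logits.detach(), x)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Student update: fool the discriminator (learned transfer loss)
    # plus the usual cross-entropy on ground-truth labels.
    g_fake = disc(s_logits, x)
    s_loss = (F.binary_cross_entropy_with_logits(g_fake, torch.ones_like(g_fake))
              + F.cross_entropy(s_logits, y))
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return d_loss.item(), s_loss.item()
```

In this reading, the discriminator plays the role of a learned loss function: rather than hand-picking a distance between teacher and student outputs (e.g. KL divergence at a fixed temperature), the student is pushed to produce logits the discriminator cannot distinguish from the teacher's.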