Do Deep Convolutional Nets Really Need to be Deep (Or Even Convolutional)?

24 Apr 2024 (modified: 18 Feb 2016) · ICLR 2016 workshop submission · Readers: Everyone
Abstract: Yes, apparently they do. Previous research by Ba and Caruana (2014) demonstrated that shallow feed-forward nets sometimes can learn the complex functions previously learned by deep nets while using a similar number of parameters as the deep models they mimic. In this paper we investigate if shallow models can learn to mimic the functions learned by deep convolutional models. We experiment with shallow models and models with a varying number of convolutional layers, all trained to mimic a state-of-the-art ensemble of CIFAR-10 models. We demonstrate that we are unable to train shallow models to be of comparable accuracy to deep convolutional models. Although the student models do not have to be as deep as the teacher models they mimic, the student models apparently need multiple convolutional layers to learn functions of comparable accuracy.
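
The mimic setup referenced in the abstract follows Ba and Caruana (2014), where a student network is trained by L2 regression on the teacher's pre-softmax logits rather than on the hard labels. The sketch below is not from the submission itself; it is a minimal, hypothetical illustration in PyTorch with a stand-in teacher and placeholder data, whereas the paper's teacher is an ensemble of deep CIFAR-10 convnets and the students vary in depth and in their use of convolutional layers.

```python
# Minimal sketch of logit-mimic training (Ba & Caruana 2014 style):
# the student regresses on the teacher's unnormalized logits with an L2 loss.
# Teacher, student sizes, and data here are placeholders, not the paper's models.
import torch
import torch.nn as nn

class ShallowStudent(nn.Module):
    """A shallow fully connected student with no convolutional layers."""
    def __init__(self, in_dim=3 * 32 * 32, hidden=1200, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),  # outputs logits, not probabilities
        )

    def forward(self, x):
        return self.net(x)

# Stand-in teacher: any frozen model mapping CIFAR-10 images to 10 logits.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
teacher.eval()

student = ShallowStudent()
optimizer = torch.optim.SGD(student.parameters(), lr=1e-2, momentum=0.9)
mimic_loss = nn.MSELoss()  # L2 regression on logits

# Placeholder batch; real training would iterate over a CIFAR-10 DataLoader.
images = torch.randn(128, 3, 32, 32)

with torch.no_grad():
    target_logits = teacher(images)   # teacher's pre-softmax outputs

student_logits = student(images)
loss = mimic_loss(student_logits, target_logits)
loss.backward()
optimizer.step()
print(f"mimic loss: {loss.item():.4f}")
```

Swapping the `ShallowStudent` for architectures with one or more convolutional layers, while keeping the same mimic loss and teacher, corresponds to the depth-versus-convolution comparison the abstract describes.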
Conflicts: uci.edu, uni-heidelberg.de, polymtl.ca, cs.ualberta.ca, cs.washington.edu, microsoft.com