Do Deep Convolutional Nets Really Need to be Deep (Or Even Convolutional)?
Gregor Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, Rich Caruana, Abdelrahman Mohamed, Matthai Philipose, Matt Richardson
Feb 18, 2016 (modified: Feb 18, 2016) · ICLR 2016 workshop submission · readers: everyone
Abstract: Yes, apparently they do. Previous research by Ba and Caruana (2014) demonstrated that shallow feed-forward nets sometimes can learn the complex functions previously learned by deep nets while using a similar number of parameters as the deep models they mimic. In this paper we investigate if shallow models can learn to mimic the functions learned by deep convolutional models. We experiment with shallow models and models with a varying number of convolutional layers, all trained to mimic a state-of-the-art ensemble of CIFAR-10 models. We demonstrate that we are unable to train shallow models to be of comparable accuracy to deep convolutional models. Although the student models do not have to be as deep as the teacher models they mimic, the student models apparently need multiple convolutional layers to learn functions of comparable accuracy.
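The mimic-training setup the abstract refers to, following Ba and Caruana (2014), trains a student model to regress the teacher's pre-softmax logits with an L2 loss instead of training on the original hard labels. The toy sketch below illustrates only that loss and training loop; the linear "student", the random stand-in "teacher", and all sizes are assumptions for illustration, not the paper's actual CIFAR-10 ensemble or student architectures.

```python
import numpy as np

# Toy sketch of logit mimicry (Ba & Caruana, 2014 style):
# the student regresses the teacher's unnormalized logits with an
# L2 loss. The "teacher" here is just a random linear map, an
# assumption standing in for the paper's CIFAR-10 ensemble.

rng = np.random.default_rng(0)
n, d, k = 512, 32, 10                 # examples, input dim, classes

X = rng.standard_normal((n, d))
W_teacher = rng.standard_normal((d, k))
teacher_logits = X @ W_teacher        # stand-in for ensemble logits

# Linear "student" trained by gradient descent on the mimic loss
W = np.zeros((d, k))
lr = 0.01
for _ in range(500):
    err = X @ W - teacher_logits      # (n, k) residual on logits
    grad = X.T @ err / n              # gradient of 0.5 * mean ||err||^2
    W -= lr * grad

mimic_loss = 0.5 * np.mean((X @ W - teacher_logits) ** 2)
print(f"final mimic loss: {mimic_loss:.4f}")
```

Regressing logits rather than softmax outputs is what lets the student see the relative confidences the teacher assigns to wrong classes, which is the extra signal that makes mimic training more effective than training on hard labels.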