Deep Pyramidal Residual Networks with Stochastic Depth
Yoshihiro Yamada, Masakazu Iwamura, Koichi Kise
Feb 17, 2017 (modified: Feb 17, 2017) · ICLR 2017 workshop submission · readers: everyone
Abstract: In generic object recognition tasks, ResNet and its improvements have achieved record-low error rates.
ResNet enables deeper networks through residual learning.
Some ResNet variants achieve higher accuracy by focusing on channels.
Thus, both network depth and channels are considered important for high accuracy.
In this paper, in addition to these, we pay attention to the use of multiple models in data-parallel learning, which we refer to as data-parallel multi-model learning.
We observed that accuracy increased with the number of concurrently used models for some methods, particularly for the combination of PyramidNet and the stochastic depth proposed in this paper.
As a result, we confirmed that the proposed methods outperformed conventional methods;
on CIFAR-100, the proposed methods achieved error rates of 16.13% and 16.18%, in contrast to 18.29% for PyramidNet and 17.18% for the current state-of-the-art DenseNet-BC.
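The stochastic depth referred to in the abstract follows the general technique of Huang et al. (2016): during training, each residual block's transform branch is randomly skipped, leaving only the identity shortcut, and at test time the branch is always applied but scaled by its survival probability. Below is a minimal sketch of one such block; the function and parameter names (`stochastic_depth_block`, `survival_prob`) are illustrative, not taken from the paper.

```python
import random

def stochastic_depth_block(x, residual_fn, survival_prob, training):
    """One residual block with stochastic depth (illustrative sketch).

    During training the residual branch `residual_fn` is kept with
    probability `survival_prob`; otherwise the block reduces to the
    identity shortcut. At test time the branch is always applied,
    scaled by `survival_prob` so expected activations match training.
    """
    if training:
        if random.random() < survival_prob:
            return x + residual_fn(x)
        return x  # branch dropped: identity shortcut only
    return x + survival_prob * residual_fn(x)
```

In the original stochastic-depth formulation, `survival_prob` typically decays linearly with depth, so later (deeper) blocks are dropped more often than earlier ones.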