Yoshihiro Yamada, Masakazu Iwamura, Koichi Kise
Feb 12, 2018 (modified: Feb 20, 2018) · ICLR 2018 Workshop Submission
Abstract: This paper proposes a powerful regularization method named ShakeDrop regularization.
ShakeDrop is inspired by Shake-Shake regularization, which decreases error rates by disturbing learning.
While Shake-Shake can be applied only to ResNeXt, which has multiple branches, ShakeDrop can be applied not only to ResNeXt but also to ResNet and PyramidNet in a memory-efficient way.
An important and interesting feature of ShakeDrop is that it strongly disturbs learning by multiplying the output of a convolutional layer by a factor that can even be negative in the forward training pass.
ShakeDrop outperformed state-of-the-art methods on CIFAR-10/100.
The full version of the paper including other experiments is available at https://arxiv.org/abs/1802.02375.
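To make the mechanism described in the abstract concrete, the following is a minimal PyTorch-style sketch of a ShakeDrop-like perturbation of a residual branch, written from the description above rather than from the authors' code. The class name, the parameter p_keep, the per-sample drawing of the factors, and the ranges [-1, 1] and [0, 1] are assumptions for illustration (consistent with the statement that the forward factor can be negative); the linearly decaying survival probability used in the paper is omitted for brevity.

```python
import torch
from torch.autograd import Function


class ShakeDrop(Function):
    """Illustrative ShakeDrop-style scaling of a residual branch
    (a sketch, not the authors' official implementation).

    Forward (training): the branch output is multiplied by
        (b + alpha - b * alpha),  b ~ Bernoulli(p_keep),  alpha ~ U[-1, 1],
    so the factor can even be negative.
    Backward: the gradient is multiplied by (b + beta - b * beta), beta ~ U[0, 1].
    Inference: the expected factor p_keep is used (since E[alpha] = 0).
    """

    @staticmethod
    def forward(ctx, x, p_keep, training):
        if not training:
            # deterministic expected scaling at test time
            return p_keep * x
        gate = torch.bernoulli(torch.tensor(float(p_keep), device=x.device))
        ctx.gate = gate
        if gate.item() == 1.0:
            return x  # unperturbed pass: branch output kept as-is
        # perturbed pass: per-sample factor drawn from [-1, 1]
        alpha = torch.empty(x.size(0), 1, 1, 1, device=x.device).uniform_(-1.0, 1.0)
        return alpha * x

    @staticmethod
    def backward(ctx, grad_output):
        if ctx.gate.item() == 1.0:
            return grad_output, None, None
        # perturbed pass: gradient scaled by a different factor from [0, 1]
        beta = torch.empty(grad_output.size(0), 1, 1, 1,
                           device=grad_output.device).uniform_(0.0, 1.0)
        return beta * grad_output, None, None
```

In a residual block, such a function would typically be applied to the branch output before adding the identity shortcut, e.g. out = shortcut + ShakeDrop.apply(branch_out, p_keep, self.training) (a hypothetical usage pattern, not taken from the paper).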