Nov 07, 2017 (modified: Nov 07, 2017) · ICLR 2018 Conference Blind Submission
Abstract: This paper proposes a powerful regularization method named ShakeDrop. ShakeDrop is inspired by Shake-Shake regularization, which decreases error rates by disturbing learning. An important and interesting feature of ShakeDrop is that it strongly disturbs learning by multiplying the output of a convolutional layer by a factor that can even be negative in the forward training pass. In the backward training pass, a factor different from the forward one is applied. As a byproduct, however, the learning process becomes unstable. Hence, the learning process is stabilized by employing the essence of Stochastic Depth (ResDrop). Combined with existing techniques (longer training and image preprocessing), ShakeDrop achieved error rates of 2.31% (an improvement of 0.25%) on the CIFAR-10 dataset and 12.19% (an improvement of 2.85%) on the CIFAR-100 dataset.
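To make the mechanism described in the abstract concrete, here is a minimal NumPy sketch of the core idea: a residual-branch output is randomly scaled by a factor `alpha` (possibly negative) in the forward pass, the gradient is scaled by an independent factor `beta` in the backward pass, and a Bernoulli gate in the style of Stochastic Depth (ResDrop) decides whether to perturb at all. The function names, gate probability, and factor ranges are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def shakedrop_forward(h, rng, p_drop=0.5, alpha_range=(-1.0, 1.0), training=True):
    """Forward pass of an illustrative ShakeDrop-like perturbation.

    With probability p_drop a gate fires and the branch output h is
    multiplied by a random factor alpha drawn from alpha_range, which
    may be negative; otherwise h passes through unchanged. At test
    time a deterministic expected scale is used instead, as in
    Stochastic Depth. Returns (output, gate) so the backward pass can
    reuse the same gate decision.
    """
    if not training:
        # Expected scale: (1 - p_drop) * 1 + p_drop * E[alpha]
        expected = (1.0 - p_drop) + p_drop * (alpha_range[0] + alpha_range[1]) / 2.0
        return h * expected, False
    gate = bool(rng.random() < p_drop)  # Bernoulli gate (ResDrop-style)
    if gate:
        alpha = rng.uniform(alpha_range[0], alpha_range[1])
        return h * alpha, gate
    return h, gate

def shakedrop_backward(grad_out, gate, rng, beta_range=(0.0, 1.0)):
    """Backward pass: if the gate fired in the forward pass, scale the
    incoming gradient by an independent factor beta, so forward and
    backward perturbations are decoupled (the key ShakeDrop trait)."""
    if gate:
        beta = rng.uniform(beta_range[0], beta_range[1])
        return grad_out * beta
    return grad_out
```

In a residual network this would wrap only the residual branch, so the identity shortcut keeps gradients flowing even when the perturbed branch is scaled toward (or past) zero.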