ShakeDrop regularization

Anonymous

Nov 07, 2017 (modified: Nov 07, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: This paper proposes a powerful regularization method named ShakeDrop regularization. ShakeDrop is inspired by Shake-Shake regularization, which decreases error rates by disturbing learning. An important and interesting feature of ShakeDrop is that it strongly disturbs learning by multiplying the output of a convolutional layer by a factor that can even be negative in the forward training pass. In the backward training pass, a factor different from the forward one is applied. As a byproduct, however, the learning process becomes unstable. Hence, it is stabilized by employing the essence of Stochastic Depth (ResDrop). Combined with existing techniques (longer training and image preprocessing), ShakeDrop achieved error rates of 2.31% (better by 0.25%) on the CIFAR-10 dataset and 12.19% (better by 2.85%) on the CIFAR-100 dataset.
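The abstract describes per-branch scaling factors: a forward factor that may be negative, a different backward factor, and a Bernoulli gate (the Stochastic Depth / ResDrop ingredient) that restores the ordinary residual computation with some probability. A minimal sketch of how such factors could be combined is below; the blending form `b + coef - b * coef` (which yields 1 whenever the gate `b` is 1, and the random coefficient otherwise) and the sampling ranges are assumptions for illustration, not details stated in the abstract.

```python
import random

def shakedrop_factors(b, alpha, beta):
    """Combine the Bernoulli gate b with the random coefficients.
    When b == 1 the branch is kept untouched (both factors are 1);
    when b == 0 the forward pass sees alpha (possibly negative) and
    the backward pass sees a different factor beta.
    NOTE: this blending rule is an assumption for illustration."""
    fwd = b + alpha - b * alpha
    bwd = b + beta - b * beta
    return fwd, bwd

def sample_factors(p_keep=0.5, alpha_range=(-1.0, 1.0), beta_range=(0.0, 1.0)):
    """Draw the gate and coefficients (ranges are assumed, not from the text)."""
    b = 1 if random.random() < p_keep else 0
    alpha = random.uniform(*alpha_range)   # forward coefficient, may be negative
    beta = random.uniform(*beta_range)     # independent backward coefficient
    return shakedrop_factors(b, alpha, beta)
```

For example, `shakedrop_factors(1, -0.7, 0.3)` returns `(1.0, 1.0)` (the layer behaves like a plain residual branch), while `shakedrop_factors(0, -0.7, 0.3)` returns `(-0.7, 0.3)`, i.e. a negatively scaled forward output and a different backward scale.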