Challenges of Adversarial Image Augmentations

Published: 18 Oct 2021, Last Modified: 05 May 2023. ICBINB@NeurIPS 2021 Spotlight.
Keywords: training augmentations, image augmentations, adversarial augmentations
TL;DR: Finding an image augmentation policy that is better than random sampling is hard. Being truly adversarial (within reasonable bounds) is detrimental. The success of using adversarial augmentations relies on being suboptimal in the adversarial sense.
Abstract: Image augmentations applied during training are crucial for the generalization performance of image classifiers. Therefore, a large body of research has focused on finding the optimal augmentation policy for a given task. Yet, RandAugment \cite{cubuk2020randaugment}, a simple random augmentation policy, has recently been shown to outperform existing sophisticated policies. Only Adversarial AutoAugment (AdvAA) \cite{zhang2019adversarial}, an approach based on the idea of adversarial training, has been shown to be better than RandAugment. In this paper, we show that random augmentations remain competitive with an optimal adversarial approach, as well as with simple curricula, and we conjecture that the success of AdvAA is due to the stochasticity of the policy controller network, which introduces a mild form of curriculum.
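For readers unfamiliar with the baseline the abstract compares against, a RandAugment-style policy can be sketched as follows: sample a fixed number of operations uniformly at random and apply each at a shared global magnitude. This is a minimal illustrative sketch; the operation names and parameter values below are placeholders, not the exact operation set or magnitude scale of Cubuk et al.'s implementation.

```python
import random

# Illustrative operation set; the actual RandAugment list is larger
# and each operation maps the magnitude to its own parameter range.
OPS = ["rotate", "shear_x", "translate_y", "color", "contrast"]

def rand_augment_policy(num_ops=2, magnitude=9, rng=None):
    """Sample a RandAugment-style sub-policy: `num_ops` operations drawn
    uniformly at random, each applied at the same global `magnitude`.

    No search or learning is involved: every training image simply gets
    a fresh random draw, which is the baseline the paper argues is hard
    to beat adversarially.
    """
    rng = rng or random.Random()
    return [(rng.choice(OPS), magnitude) for _ in range(num_ops)]

# Example: one randomly sampled sub-policy for a single training image.
policy = rand_augment_policy(num_ops=2, magnitude=9, rng=random.Random(0))
print(policy)
```

The key design point, as the abstract notes, is that this policy has no controller to train: its only hyperparameters are the number of operations and the global magnitude.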
Category: Negative result: I would like to share my insights and negative results on this topic with the community