Abstract: Constructing adversarial examples is important for making machine learning systems more robust, because such examples expose their vulnerabilities. We present an approach for producing adversarial examples against image cropping systems. The proposed method iteratively computes perturbations from the gradient information of a surrogate model that approximates the cropping system provided by a social network service, and attacks the cropping model so that the region to be cropped changes. We measure the shift in the peak of the saliency map and use this shift to quantitatively verify the effectiveness of the attack. We demonstrate that the proposed method outperforms baseline methods.
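The abstract describes an iterative, gradient-based perturbation computed against a surrogate saliency model. The exact procedure is not specified here, so the following is only a minimal PGD-style sketch under assumptions: the surrogate `saliency_model`, the target location `target_xy`, the step size `alpha`, and the budget `eps` are all hypothetical placeholders, not the authors' actual implementation.

```python
import torch

def attack_cropping(image, saliency_model, target_xy, steps=40,
                    eps=8 / 255, alpha=1 / 255):
    """Sketch: perturb `image` so the surrogate saliency model's peak
    moves toward `target_xy` (hypothetical interface and loss)."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        sal = saliency_model(x_adv)  # assumed to return a (1, 1, H, W) saliency map
        # Maximize saliency at the target location to shift the peak.
        loss = sal[..., target_xy[1], target_xy[0]].sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                # gradient ascent step
            x_adv = image + (x_adv - image).clamp(-eps, eps)   # L_inf budget
            x_adv = x_adv.clamp(0.0, 1.0)                      # valid pixel range
        x_adv = x_adv.detach()
    return x_adv

def peak_shift(sal_before, sal_after):
    """Euclidean distance between the argmax locations of two saliency maps,
    one possible reading of the peak-shift metric mentioned in the abstract."""
    def peak(s):
        idx = torch.argmax(s.flatten())
        w = s.shape[-1]
        return torch.tensor([idx % w, idx // w], dtype=torch.float)
    return torch.dist(peak(sal_before), peak(sal_after)).item()
```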