Lost in Translation: Modern Image Classifiers still degrade even under simple Translations

Published: 12 Jul 2022, Last Modified: 05 May 2023
Shift Happens 2022 Poster
Abstract: Modern image classifiers are potentially used in safety-critical applications and thus should not be vulnerable to natural transformations of the image, such as those arising from variations in image acquisition. While it is known that image classifiers can degrade significantly in performance under translations and rotations, prior works did not ensure that the object of interest remains fully contained in the image, and they introduce boundary artefacts, so that the input is no longer a natural image. In this paper we leverage the pixelwise segmentations of the ImageNet-S dataset to search for a translation and rotation such that i) the object is fully contained in the image (potentially combined with a zoom), ii) the image remains natural (no padding with black pixels), and iii) the resulting natural image is misclassified. We observe a consistent drop in accuracy over a large set of image classifiers, showing that natural adversarial changes are an important threat model which deserves more attention.
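The kind of search described in the abstract can be sketched roughly as follows: slide a crop window over the full image so that every candidate input is a natural sub-image that, according to the ImageNet-S mask, still contains the whole object, and report a crop position that flips the classifier's prediction. This is a minimal illustration, not the authors' released code; `model`, `preprocess`, and the crop size and stride are placeholder assumptions for a standard torchvision-style pipeline, and rotations and zoom are omitted.

```python
import numpy as np
import torch


def object_bbox(mask):
    """Bounding box (top, bottom, left, right) of the foreground pixels in a 2D mask."""
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return top, bottom, left, right


def find_adversarial_crop(image, mask, label, model, preprocess,
                          crop=224, stride=8):
    """Search natural crop positions whose window fully contains the object.

    Returns the first (y, x) offset whose crop the classifier gets wrong,
    or None if no such translation is found.
    """
    H, W = mask.shape
    top, bottom, left, right = object_bbox(mask)
    if bottom - top >= crop or right - left >= crop:
        return None  # object larger than the window; would require a zoom step
    # Valid offsets keep the window inside the image and the object inside the window.
    for y in range(max(0, bottom - crop + 1), min(top, H - crop) + 1, stride):
        for x in range(max(0, right - crop + 1), min(left, W - crop) + 1, stride):
            window = image[y:y + crop, x:x + crop]
            inp = preprocess(window).unsqueeze(0)
            with torch.no_grad():
                pred = model(inp).argmax(dim=1).item()
            if pred != label:
                return (y, x)  # a natural translation that flips the prediction
    return None
```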
Submission Type: Full submission (technical report + code/data)
Supplement: zip
Co Submission: No, I am not submitting to the dataset and benchmark track and will complete my submission by June 3.