Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets

16 Feb 2020 (modified: 16 Feb 2020) · OpenReview Archive Direct Upload
Abstract: We are concerned with the vulnerability of computer vision models to distributional shifts. We formulate a combinatorial optimization problem whose solution identifies the regions of the image space, defined by transformations applied to the input, where a given model is most vulnerable, and we tackle it with standard search algorithms. We further embed this idea in a training procedure that, over iterations, defines new data augmentation rules according to the image transformations the current model is most vulnerable to. An empirical evaluation on classification and semantic segmentation problems suggests that the devised algorithm trains models that are more robust against content-preserving image manipulations and, in general, against distributional shifts.
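A minimal sketch of the procedure the abstract describes, assuming a discrete transformation set and random search as the "standard search algorithm"; the transformation names, the search space, and the helper functions below are illustrative assumptions, not the authors' implementation:

```python
import random

import torch
import torch.nn.functional as F

# Illustrative content-preserving transformations; the paper's actual
# transformation set is not specified in the abstract (assumption).
def adjust_brightness(x, delta):
    return (x + delta).clamp(0.0, 1.0)

def adjust_contrast(x, factor):
    mean = x.mean(dim=(-2, -1), keepdim=True)
    return (mean + factor * (x - mean)).clamp(0.0, 1.0)

def horizontal_flip(x, flag):
    return torch.flip(x, dims=[-1]) if flag else x

# Hypothetical discrete search space over transformation parameters.
SEARCH_SPACE = {
    "brightness": [-0.2, 0.0, 0.2],
    "contrast": [0.7, 1.0, 1.3],
    "hflip": [False, True],
}

def apply_transforms(x, cfg):
    x = adjust_brightness(x, cfg["brightness"])
    x = adjust_contrast(x, cfg["contrast"])
    return horizontal_flip(x, cfg["hflip"])

def worst_case_transform(model, x, y, n_trials=32):
    """Random search for the transformation composition on which the model
    incurs the highest loss, i.e. the region of the transformation space
    where it is most vulnerable."""
    best_cfg, best_loss = None, float("-inf")
    model.eval()
    with torch.no_grad():
        for _ in range(n_trials):
            cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
            loss = F.cross_entropy(model(apply_transforms(x, cfg)), y).item()
            if loss > best_loss:
                best_cfg, best_loss = cfg, loss
    return best_cfg

def adversarial_augmentation_step(model, optimizer, x, y):
    """One training step that uses the currently hardest transformation as
    the data-augmentation rule, mirroring the iterative procedure sketched
    in the abstract."""
    cfg = worst_case_transform(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(apply_transforms(x, cfg)), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

With a small discrete space like this, exhaustive enumeration (e.g. itertools.product over the SEARCH_SPACE values) would also qualify as a standard search algorithm; random search is used here only to keep the per-step cost bounded by n_trials.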