Attacking Perceptual Similarity Metrics

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: perceptual similarity metrics, computer vision, adversarial robustness, image quality assessment, transferable adversarial examples
Abstract: Perceptual similarity metrics have progressively become more correlated with human judgments of perceptual similarity; however, despite recent advances, the addition of an imperceptible distortion can still compromise these metrics. To the best of our knowledge, no study to date has systematically examined the robustness of these metrics to imperceptible adversarial perturbations. Following the two-alternative forced choice experimental design with two distorted images and one reference image, we adversarially perturb one of the distorted images toward the reference until the metric flips its judgment. We first show that all metrics are susceptible to perturbations generated via common adversarial attacks such as FGSM, PGD, and the one-pixel attack. Next, we attack the widely adopted LPIPS metric using FlowAdv, our flow-based spatial attack, in a white-box setting to craft adversarial examples that transfer effectively to other similarity metrics in a black-box setting. In addition, we combine the FlowAdv spatial attack with the $\ell_\infty$-bounded PGD attack to increase transferability, and we use these adversarial examples to benchmark the robustness of both traditional and recently developed metrics. Our benchmark provides a good starting point for discussion and further research on the robustness of metrics to imperceptible adversarial perturbations.
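The following is a minimal sketch of the $\ell_\infty$-bounded PGD flip attack described in the abstract, assuming the open-source `lpips` package as the target metric; the function name `pgd_flip_attack` and the hyperparameters (`eps`, `alpha`, `n_steps`) are illustrative choices, not the authors' actual implementation.

```python
import torch
import lpips


def pgd_flip_attack(metric, ref, x_other, x_target,
                    eps=4 / 255, alpha=1 / 255, n_steps=40):
    """Perturb x_target within an l_inf ball of radius eps so the metric
    judges it closer to ref than x_other, flipping the 2AFC decision."""
    x_adv = x_target.clone().detach()
    d_other = metric(x_other, ref).detach()  # fixed distance of the competing distorted image
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        d_adv = metric(x_adv, ref)
        if (d_adv < d_other).all():          # judgment already flipped: stop early
            break
        grad = torch.autograd.grad(d_adv.sum(), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                      # move closer to the reference
            x_adv = x_target + (x_adv - x_target).clamp(-eps, eps)   # project back into the l_inf ball
            x_adv = x_adv.clamp(-1, 1)                               # keep a valid image range
    return x_adv.detach()


# Usage (images as NCHW tensors scaled to [-1, 1]):
# metric = lpips.LPIPS(net='alex')
# x_adv = pgd_flip_attack(metric, ref, x0, x1)
```

This sketch covers only the PGD component; the flow-based FlowAdv attack instead optimizes a spatial warping field rather than an additive perturbation.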
One-sentence Summary: We test the robustness of perceptual similarity metrics to imperceptible adversarial perturbations