Keywords: Learned Image Compression, Adversarial Attack, Invisible, Reconstruction Quality
TL;DR: We introduce an imperceptible adversarial attack approach designed to effectively degrade the reconstruction quality of learned image compression.
Abstract: Learned Image Compression (LIC) has recently become the trending technique for image transmission due to its notable performance. Despite its popularity, the robustness of LIC with respect to the quality of image reconstruction remains under-explored. In this paper, we introduce an imperceptible adversarial attack approach designed to effectively degrade the reconstruction quality of LIC, so that the reconstructed image is severely corrupted by noise and identifying any object in it becomes virtually impossible. More specifically, we generate adversarial examples by introducing a Frobenius norm-based loss function that maximizes the discrepancy between the original images and the images reconstructed from the adversarial examples. Further, leveraging the human visual system's insensitivity to high-frequency components, we introduce an Imperceptibility Constraint (IC) to ensure that the perturbations remain inconspicuous. Experimental results on the Kodak dataset with various LIC models demonstrate the effectiveness of our method. In addition, we provide several findings and suggestions for designing future defenses.
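As a rough illustration of the idea described in the abstract (not the authors' released code), the attack can be sketched as a PGD-style optimization that maximizes the Frobenius norm between the original image and the LIC reconstruction of the adversarial example. The model choice (compressai's bmshj2018_factorized), the step size, budget, and iteration count are all assumptions, and the simple L-infinity clamp stands in for the paper's Imperceptibility Constraint, whose exact form the abstract does not specify.

```python
# Illustrative sketch only: PGD-style attack on a learned image compression
# model. Maximizes ||x - x_hat(x_adv)||_F while keeping the perturbation
# within a small L-inf ball (a stand-in for the paper's IC).
import torch
from compressai.zoo import bmshj2018_factorized  # one example LIC model (assumed choice)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = bmshj2018_factorized(quality=3, pretrained=True).to(device)
model.train()  # use the additive-noise quantization proxy so gradients flow through the quantizer

def attack(x, eps=4 / 255, alpha=1 / 255, steps=100):
    """Return an adversarial example x_adv with ||x_adv - x||_inf <= eps (hyperparameters assumed)."""
    x = x.to(device)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        x_hat = model(x_adv)["x_hat"]          # reconstruction of the adversarial input
        loss = torch.linalg.norm(x - x_hat)    # Frobenius norm of the reconstruction error
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()  # ascend on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)              # keep valid pixel range
        x_adv = x_adv.detach()
    return x_adv

# Example usage on a random image; in the paper's setting, Kodak images would be loaded here.
x = torch.rand(1, 3, 256, 256)
x_adv = attack(x)
```

In the paper itself, the L-infinity clamp above would be replaced by the proposed Imperceptibility Constraint, which concentrates the perturbation in high-frequency components that human vision is less sensitive to.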
Submission Number: 35