Abstract: In this paper, we introduce an imperceptible adversarial attack that severely degrades the reconstruction quality of learned image compression (LIC), corrupting the reconstructed image with noise so heavily that identifying any object in it becomes virtually impossible. Specifically, we generate adversarial examples using a Frobenius norm-based loss function that maximizes the discrepancy between the original image and the image reconstructed from its adversarial example, thereby severely corrupting the reconstruction.
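The core quantity in the attack objective is the Frobenius norm of the difference between the original image and the reconstruction of the adversarial example. Below is a minimal sketch of that loss term in NumPy; the function name `frobenius_attack_loss` and the toy arrays are illustrative assumptions, since the paper's codec and optimization loop are not shown here.

```python
import numpy as np

def frobenius_attack_loss(original: np.ndarray, reconstructed: np.ndarray) -> float:
    # Loss the attacker seeks to MAXIMIZE: the Frobenius norm of the
    # pixel-wise difference between the clean image and the LIC output
    # for the adversarial input. For a 2-D array, np.linalg.norm
    # defaults to the Frobenius norm.
    return float(np.linalg.norm(original - reconstructed))

# Toy usage with stand-in "images" (hypothetical; real inputs would be
# the clean image x and the codec's reconstruction of x + perturbation).
x = np.zeros((8, 8))
x_hat = np.ones((8, 8))
loss = frobenius_attack_loss(x, x_hat)
# → 8.0, since sqrt(64 * 1^2) = 8
```

In the full attack, this loss would be back-propagated through the compression model to update the adversarial perturbation by gradient ascent, subject to an imperceptibility constraint.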