Attacking Neural Image Compression via Modular Adversarial Optimization: From Global Distortion to Local Artifacts

ICLR 2026 Conference Submission 25459 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Adversarial Robustness, Neural Image Compression, Adversarial Attacks
TL;DR: We propose a modular adversarial attack on neural image codecs that degrades compression quality both over the entire image and in local regions to improve effectiveness, and filters the perturbation noise to remain imperceptible.
Abstract: The rapid progress in neural image compression (NIC) has led to the deployment of advanced codecs, such as JPEG AI, which significantly outperform conventional approaches. However, despite extensive research on the adversarial robustness of neural networks in various computer vision tasks, the vulnerability of NIC models to adversarial attacks remains underexplored. Moreover, existing adversarial attacks on NIC are ineffective against modern codecs. In this paper, we introduce a novel adversarial attack targeting NIC models. Our approach is built upon two core stages: (1) optimization of global-local distortions, and (2) a selective masking strategy that enhances attack stealthiness. Experimental evaluations demonstrate that the proposed method outperforms prior attacks on both JPEG AI and other NIC models, achieving greater distortion on decoded images and lower perceptibility of adversarial images. We also provide a theoretical analysis and discuss the underlying reasons for the effectiveness of our attack, offering new insights into the security and robustness of learned image compression.
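The two stages described in the abstract can be approximated with a short sketch. The snippet below is a minimal PGD-style illustration of a combined global-local distortion objective with a variance-based perturbation mask; the names `codec`, `patch`, `lam`, and the masking heuristic are assumptions for illustration only, not the authors' actual method.

```python
# Minimal sketch of a global-local adversarial objective against a differentiable
# NIC codec. All names (codec, patch, lam) and the masking heuristic are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def attack(codec, x, eps=8/255, alpha=1/255, steps=50, patch=64, lam=1.0):
    """PGD-style attack: maximize global + worst-local reconstruction error,
    while masking the perturbation to textured regions for stealthiness."""
    # Simple texture mask: keep the perturbation only where local variance is
    # high (a stand-in for a selective masking strategy).
    gray = x.mean(dim=1, keepdim=True)
    local_mean = F.avg_pool2d(gray, 7, stride=1, padding=3)
    local_var = F.avg_pool2d(gray ** 2, 7, stride=1, padding=3) - local_mean ** 2
    thresh = local_var.flatten(1).median(dim=1).values.view(-1, 1, 1, 1)
    mask = (local_var > thresh).float()

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        x_adv = (x + delta * mask).clamp(0, 1)
        x_rec = codec(x_adv)                      # decoded image from the NIC model
        err = (x_rec - x_adv) ** 2
        global_loss = err.mean()                  # global distortion term
        # Local term: mean error of the worst patch per image.
        patches = F.avg_pool2d(err.mean(dim=1, keepdim=True), patch, stride=patch)
        local_loss = patches.flatten(1).max(dim=1).values.mean()
        loss = global_loss + lam * local_loss
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()    # gradient ascent on distortion
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta.detach() * mask).clamp(0, 1)
```

In this sketch the global term drives overall reconstruction error while the worst-patch term concentrates damage in local regions, and the mask confines the perturbation to textured areas where it is less visible; the paper's actual optimization and masking details may differ.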
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 25459