Optimization-Based Defender via Coarse-To-Fine Tensor Network Representation

Submitted to ICLR 2026 on 18 Sept 2025 (modified: 11 Feb 2026) · License: CC BY 4.0
Keywords: Tensor Network Representation, Robustness
Abstract: Deep neural networks are vulnerable to well-designed adversarial attacks. Although numerous defense strategies have been proposed, most are tailored to specific threats or datasets and thus struggle to generalize across diverse adversarial scenarios. In this paper, we propose Tensor Network Purification (TNP), a novel optimization-based defense technique built upon a specially designed tensor network decomposition algorithm. TNP depends neither on a pre-trained generative model nor on a specific dataset, enabling robust generalization. To this end, the key challenge lies in relaxing the Gaussian-noise assumptions of classical decompositions and accommodating unknown perturbation distributions. Instead of imposing consistency through traditional objectives, TNP aims to reconstruct the latent clean example from its adversarially perturbed input. Specifically, TNP leverages progressive downsampling together with a new adversarial objective that minimizes reconstruction error while suppressing the inadvertent restoration of the perturbations. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet show that TNP generalizes effectively across diverse norm threats, attack types, and datasets, delivering a versatile and promising defense.
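To give a feel for the low-rank-reconstruction idea behind such purification defenses, here is a minimal, hypothetical sketch: it fits the best low-rank approximation to a perturbed input (the classic Eckart-Young optimum, computed in closed form via truncated SVD), so smooth image structure is reconstructed while dense high-frequency perturbations are largely discarded. This is illustration only; the function name, toy data, and the use of plain SVD are assumptions, not the paper's actual tensor network decomposition, progressive downsampling scheme, or adversarial objective.

```python
import numpy as np

def low_rank_purify(x, rank):
    """Best rank-`rank` reconstruction of x, i.e. the minimizer of
    ||x_hat - x||_F subject to rank(x_hat) <= rank, via truncated SVD.
    (Stand-in for the paper's tensor network decomposition.)"""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

# Toy demo: a rank-4 "clean image" plus a small dense perturbation.
rng = np.random.default_rng(0)
clean = rng.standard_normal((32, 4)) @ rng.standard_normal((4, 32))
noisy = clean + 0.1 * rng.standard_normal((32, 32))

# The low-rank reconstruction lands closer to the clean signal than
# the perturbed input does, since the perturbation spreads its energy
# over all directions while the signal occupies only a few.
purified = low_rank_purify(noisy, rank=4)
```

The same intuition motivates the suppression goal stated in the abstract: fitting the input too faithfully (here, choosing too high a rank) would restore the perturbation along with the image.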
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 11932