Adversarial Attack and Defense in Deep Ranking
Abstract: Deep Neural Network classifiers are vulnerable to adversarial attack, where an imperceptible perturbation could result in
misclassification. However, the vulnerability of DNN-based image ranking systems remains under-explored. In this paper, we propose two
attacks against deep ranking systems, i.e., Candidate Attack and Query Attack, that can raise or lower the rank of chosen candidates by
adversarial perturbations. Specifically, the expected ranking order is first represented as a set of inequalities, and then a triplet-like
objective function is designed to obtain the optimal perturbation. Conversely, an anti-collapse triplet defense is proposed to improve the
ranking model robustness against all proposed attacks, where the model learns to prevent the positive and negative samples from being pulled
close to each other by adversarial attack. To comprehensively measure the empirical adversarial robustness of a ranking model with our
defense, we propose an empirical robustness score, which involves a set of representative attacks against ranking models. Our
adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online
Products datasets. Experimental results demonstrate that a typical deep ranking system can be effectively compromised by our attacks.
Nevertheless, our defense can significantly improve the ranking system robustness and simultaneously mitigate a wide range of attacks.
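The Candidate Attack described above can be sketched in miniature: the attacker perturbs one candidate so that triplet-like hinge terms pull its embedding closer to the query than every competitor, under an L-infinity budget. The sketch below is illustrative only, assuming a toy linear embedding `f(x) = W x` in place of a deep ranker; all names, the margin, and the budget `eps` are assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear embedding f(x) = W x, a stand-in for a deep ranking model (assumption).
d_in, d_emb = 8, 4
W = rng.normal(size=(d_emb, d_in))

def embed(x):
    return W @ x

query = rng.normal(size=d_in)
candidates = rng.normal(size=(5, d_in))
target = 4  # candidate whose rank we want to raise (a CA+ style attack)

def rank_of(cands, idx):
    """Position of candidate idx when candidates are sorted by distance to the query."""
    d = [np.linalg.norm(embed(c) - embed(query)) for c in cands]
    return int(np.argsort(d).tolist().index(idx))

eps, lr, margin, steps = 0.5, 0.1, 0.1, 200
delta = np.zeros(d_in)

for _ in range(steps):
    e_q = embed(query)
    diff = embed(candidates[target] + delta) - e_q
    d_c = np.linalg.norm(diff)
    grad = np.zeros(d_in)
    # Triplet-like hinge per competitor: active while the target is not yet
    # closer to the query than that competitor by at least `margin`.
    for j, cj in enumerate(candidates):
        if j == target:
            continue
        d_j = np.linalg.norm(embed(cj) - e_q)
        if d_c - d_j + margin > 0:
            grad += W.T @ (diff / (d_c + 1e-12))  # gradient of d_c w.r.t. delta
    delta = np.clip(delta - lr * grad, -eps, eps)  # project back into the L_inf ball

adv = candidates.copy()
adv[target] = candidates[target] + delta
print("rank before:", rank_of(candidates, target), "-> after:", rank_of(adv, target))
```

Each gradient step moves the target's embedding toward the query only while some hinge is active, so the target's rank can only improve (or stay put) within the budget; the anti-collapse defense works against exactly this pressure by keeping positives and negatives separable under such perturbations.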