Abstract: Owing to the powerful feature extraction capabilities of deep neural networks (DNNs), deep image hashing is widely applied in fields such as image authentication, copy detection, and content retrieval, making its security a critical concern. Among various security metrics, collision resistance serves as a crucial indicator for deep image hashing methods. Research on collision attacks not only reveals potential vulnerabilities of deep image hashing but also promotes the development of more robust and secure hashing methods. In this paper, we propose a novel generative collision attack scheme that offers several advantages over existing attack schemes based on adversarial examples. Our scheme requires no additional perturbation of the image and can simultaneously generate multiple hash-collision images of different classes specified by the attacker. To the best of our knowledge, this is the first generative collision attack scheme effective across various deep image hashing methods. Specifically, our attack framework consists of three parts, i.e., a Hash-to-Noise Network (HTNN), a pretrained BigGAN generator, and a conditional discriminator. The HTNN embeds the hash code of the target image and the attacker-specified class information into a “noise” vector. By optimizing hash distance loss functions between the generated and target images, this “noise” guides the generator to directly produce images that satisfy the collision requirement. Meanwhile, the discriminator ensures that the generated images are visually realistic. Extensive experimental results verify that our scheme effectively generates multiple high-quality images of attacker-specified classes, achieving a high hash collision attack success rate and applicability across state-of-the-art deep hashing methods.
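The core idea, optimizing a hash distance loss so that a noise vector drives a generator toward an image whose hash collides with the target's, can be illustrated with a minimal toy sketch. Everything here is a hypothetical stand-in, not the paper's method: the generator and hash network are replaced by fixed random linear maps, the binary sign quantization is relaxed with tanh (a common differentiable surrogate), and the noise vector is found by plain gradient descent rather than predicted by an HTNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a linear "generator" G (noise -> image) and a
# linear "hash network" H (image -> hash activations), scaled so the
# pre-tanh activations start near the linear region.
D_NOISE, D_IMG, D_HASH = 16, 32, 8
G = rng.normal(size=(D_IMG, D_NOISE)) / np.sqrt(D_NOISE)
H = rng.normal(size=(D_HASH, D_IMG)) / np.sqrt(D_IMG)

# Hash code of the victim (target) image, in {-1, +1}^D_HASH.
target_code = np.sign(rng.normal(size=D_HASH))

# "Noise" vector to optimize; in the paper the HTNN would predict this
# from the target hash code and the attacker-specified class.
z = rng.normal(size=D_NOISE)
lr = 0.5
for _ in range(2000):
    a = H @ (G @ z)          # continuous hash activations of generated image
    t = np.tanh(a)           # relaxed code in (-1, 1)
    r = t - target_code      # residual of the relaxed Hamming (MSE) loss
    grad_a = 2.0 / D_HASH * r * (1.0 - t ** 2)   # d loss / d a
    z -= lr * (G.T @ (H.T @ grad_a))             # chain rule back to z

# Binarize the optimized output: its code should collide with the target's.
generated_code = np.sign(np.tanh(H @ (G @ z)))
hamming = int(np.sum(generated_code != target_code))
print(hamming)
```

In this toy setting the relaxed loss has no spurious finite critical points (the linear map has full row rank and tanh never reaches ±1), so descent drives the relaxed code toward the target and the final Hamming distance drops to zero, mirroring how the hash distance loss steers generation toward a collision in the real framework.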