Abstract: With the rapid growth of visual content, deep learning
to hash has recently gained popularity in the image retrieval community. Although it greatly improves search efficiency, privacy is also at risk when images on the web
are retrieved at a large scale and exploited as a rich mine
of personal information: an adversary can extract private
images by querying any usable model for similar images from a targeted category. Existing methods based on
image processing preserve privacy at the cost of perceptual quality. In this paper, we propose a new mechanism
based on adversarial examples to “stash” private images
in the deep hash space while maintaining perceptual similarity. We first find that a simple approach of Hamming
distance maximization is not robust against brute-force adversaries. We then develop a new loss function that maximizes the Hamming distance not only to the original category, but also to the centers of all classes, partitioned
into clusters of various sizes. Extensive experiments
show that the proposed defense increases the attacker's
effort by 2-7 orders of magnitude, without significant computational overhead or perceptual degradation. We also demonstrate 30-60% transferability in hash
space in a black-box setting. The code is available at:
https://github.com/sugarruy/hashstash
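The core objective described above can be illustrated with a minimal sketch. The function names and the ±1 code representation below are assumptions for illustration, not the paper's actual implementation: for binary hash codes, the defense's loss negates the summed Hamming distances from a code to the original class center and to a set of cluster centers, so that minimizing the loss pushes the adversarial image's code away from all of them.

```python
def hamming_dist(h, c):
    """Hamming distance between two binary hash codes given as lists of +/-1."""
    return sum(1 for a, b in zip(h, c) if a != b)

def stash_loss(h, orig_center, cluster_centers):
    """Hypothetical sketch of the proposed objective: the negated sum of
    Hamming distances from code h to the original class center and to
    every cluster center. Minimizing it pushes h away from all of them."""
    total = hamming_dist(h, orig_center)
    total += sum(hamming_dist(h, c) for c in cluster_centers)
    return -total

# Toy example with 8-bit codes: a code opposite to its original center
# and identical to one cluster center.
h = [1] * 8
loss = stash_loss(h, orig_center=[-1] * 8, cluster_centers=[[1] * 8])
```

In the full method this loss would be minimized over the image pixels via the hash network's gradients; the multi-cluster term is what distinguishes it from plain Hamming distance maximization, which the abstract notes is not robust to brute-force adversaries.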