Abstract: Image hashing techniques, which map images to hash codes, are widely used in many image-related tasks. A recent trend is deep supervised hashing methods that leverage the annotated similarity of images measured point-wise, pair-wise, triplet-wise, or list-wise. Among these methods, central similarity quantization (CSQ) introduces a state-of-the-art point-wise metric called global similarity, which encourages similar data points to aggregate around a common centroid and dissimilar ones around different centroids. However, it sometimes fails, leaving some data points drifting away from their corresponding hash centers during training, especially for multi-labeled data. In this study, we propose a novel image hashing method that incorporates pair-wise similarity into central similarity quantization, enabling the model to capture the global similarity of image data while simultaneously attending to drifting points. To this end, we present a novel learning objective based on a weighted partial-softmax loss and implement it with a deep hashing model. Extensive experiments conducted on publicly available datasets demonstrate that the proposed method achieves performance gains over competing methods.
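The abstract only names the ingredients of the objective, so the following PyTorch sketch is an illustration under our own assumptions, not the paper's released code: it combines a CSQ-style central similarity term (binary cross-entropy between relaxed hash codes and assigned hash centers, as in the CSQ paper) with a generic pair-wise similarity term as a stand-in for the paper's weighted partial-softmax loss. All function names (`central_similarity_loss`, `pairwise_similarity_loss`, `total_loss`) and the weight `lam` are hypothetical.

```python
import torch
import torch.nn.functional as F

def central_similarity_loss(codes: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
    """CSQ-style global similarity term (assumption: BCE form from the CSQ paper).

    codes:   (N, K) relaxed hash codes in (0, 1), e.g. sigmoid outputs of the hash layer.
    centers: (N, K) float tensor of {0, 1} hash-center bits assigned to each sample.
    """
    return F.binary_cross_entropy(codes, centers)

def pairwise_similarity_loss(codes: torch.Tensor, sim: torch.Tensor) -> torch.Tensor:
    """Generic pair-wise term: pull similar pairs together, push dissimilar ones apart.

    sim: (N, N) float tensor with 1 for similar pairs and 0 for dissimilar pairs.
    NOTE: this is a placeholder, not the paper's weighted partial-softmax loss.
    """
    b = 2.0 * codes - 1.0                 # map (0, 1) codes to (-1, 1)
    inner = b @ b.t() / b.size(1)         # normalized inner product in [-1, 1]
    return F.binary_cross_entropy((inner + 1.0) / 2.0, sim)

def total_loss(codes: torch.Tensor, centers: torch.Tensor,
               sim: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """Weighted combination of the global (central) and pair-wise objectives."""
    return central_similarity_loss(codes, centers) + lam * pairwise_similarity_loss(codes, sim)
```

The intuition matches the abstract: the central term drives each code toward its hash center (global similarity), while the pair-wise term supplies a gradient for points whose codes drift away from their centers relative to their neighbors.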