Abstract: Unsupervised image retrieval poses a demanding challenge: learning hash codes that are discriminative with respect to the similarity of each sample. Underutilization of inter-sample similarity often leads to suboptimal pseudo-label construction during contrastive learning, making it difficult to distinguish positive from negative sample pairs. To address this issue, this study proposes a novel contrastive learning method called sample-weighted contrastive hashing (SWCH). SWCH introduces a similarity attention module that exploits the similarity information between samples to construct a similarity matrix, yielding a more accurate representation of the relationships among samples. In addition, inspired by the design of PolyLoss, a loss term focused on sample similarity is designed to strengthen the model's ability to capture similarities among data points and thereby generate better hash codes. Comprehensive experiments show that the proposed method achieves significant performance improvements on unsupervised image retrieval tasks: compared with existing methods, SWCH learns more discriminative hash codes, captures inter-sample similarity more effectively, and ultimately attains superior retrieval results.
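To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of a contrastive loss over a pairwise similarity matrix with a PolyLoss-style augmentation term `eps * (1 - p_t)`, where `p_t` is the softmax probability assigned to each sample's positive view. The function name `swch_loss`, the temperature, and the pairing convention (rows `i` and `i+N` are two views of the same image) are all illustrative assumptions:

```python
import numpy as np

def swch_loss(z, temperature=0.5, eps=1.0):
    """Hypothetical sketch of a similarity-weighted contrastive loss.

    z: (2N, d) array of L2-normalised embeddings, where rows i and i+N
    are assumed to be two augmented views of the same image.
    """
    n2 = z.shape[0]
    n = n2 // 2
    sim = z @ z.T / temperature          # pairwise similarity matrix
    np.fill_diagonal(sim, -np.inf)       # exclude self-similarity
    # Row-wise softmax: probability that column j is the positive of row i.
    p = np.exp(sim - sim.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    p_t = p[np.arange(n2), pos]          # probability of the true positive
    # PolyLoss-style term: cross-entropy plus eps * (1 - p_t).
    return float(np.mean(-np.log(p_t) + eps * (1.0 - p_t)))
```

The loss decreases as each sample's embedding moves closer to its positive view and away from the other samples; the extra `(1 - p_t)` term adds gradient weight to poorly matched pairs.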