Self-Supervised Pretraining for Deep Hash-Based Image Retrieval

Published: 01 Jan 2022 · Last Modified: 13 May 2025 · ICIP 2022 · CC BY-SA 4.0
Abstract: Deep hashing aims to produce discriminative binary hash codes for fast image retrieval through a deep baseline network and an additional trainable hash function. In a supervised deep hashing network, the baseline network is generally initialized with classification-based pretrained models, and the overall hashing network is trained in a supervised fashion. However, since classification and retrieval are two different tasks, it is necessary to reconsider the initial model for the baseline network. In this paper, we propose, for the first time, to use a self-supervised pretrained model as the baseline. We investigate the impact of pretrained model types by comparing deep hashing networks whose baseline network is initialized with 1) random weights, 2) conventional supervised pretrained weights, and 3) the proposed self-supervised pretrained weights. As a result, we confirm that the performance of deep hashing differs depending on the initial baseline setting, and the proposed self-supervised baseline model shows performance comparable to or better than the supervised one. Our code is released at https://github.com/HaeyoonYang/SSPH.
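To make the described architecture concrete, the following is a minimal PyTorch sketch (not the authors' released code) of a deep hashing network with the three baseline initializations compared in the abstract: random weights, supervised ImageNet pretraining, and self-supervised pretraining. The checkpoint path `ssl_checkpoint.pth`, the ResNet-50 backbone choice, and the tanh hash head are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a deep hashing network: a baseline backbone plus a
# trainable hash function, with selectable weight initialization.
import torch
import torch.nn as nn
from torchvision import models


class DeepHashNet(nn.Module):
    def __init__(self, hash_bits: int = 64, init: str = "self_supervised"):
        super().__init__()
        # Baseline network: ResNet-50 backbone with its classifier head removed.
        if init == "supervised":
            # Conventional supervised (classification-based) pretrained weights.
            backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        else:
            backbone = models.resnet50(weights=None)  # random initialization
            if init == "self_supervised":
                # Load a self-supervised pretrained checkpoint
                # (e.g., MoCo/SimCLR-style); the path is a placeholder.
                state = torch.load("ssl_checkpoint.pth", map_location="cpu")
                backbone.load_state_dict(state, strict=False)
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # Trainable hash function: projects 2048-d features to `hash_bits`
        # real values in (-1, 1), binarized with sign() at retrieval time.
        self.hash_layer = nn.Sequential(nn.Linear(2048, hash_bits), nn.Tanh())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.hash_layer(self.backbone(x))

    @torch.no_grad()
    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Binary hash codes in {-1, +1} for fast image retrieval.
        return torch.sign(self.forward(x))
```

A usage example: `DeepHashNet(hash_bits=64, init="self_supervised")` would then be fine-tuned with a supervised hashing loss, and `encode()` used to index and query the retrieval database.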