Abstract: Existing deep hashing methods mainly focus on preserving pairwise image similarity or reducing quantization error, and often overlook the discriminative capacity of the real-valued features learned by neural networks, which limits retrieval performance. To address these issues, we propose Collaborative Asymmetric Similarity-preserving Hashing (CASpH), a dual-stream deep hashing method that preserves the semantic structure across categories while generating discriminative hash codes. Specifically, a Cross Attention Feature Enhancement Block (CAFEB) is designed to mitigate the information loss caused by feature dimensionality reduction during extraction. Furthermore, two asymmetric deep networks are constructed to capture image similarity based on semantic labels. To ensure that binary codes in Hamming space retain the semantic similarity of the original space, an asymmetric loss is introduced to capture the similarity between binary codes and real-valued features. This asymmetric loss not only enhances retrieval performance but also speeds up convergence during training. Extensive experiments on three benchmark datasets demonstrate that CASpH outperforms other comparative methods.
External IDs: dblp:conf/cscwd/WangGJ25
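The abstract does not give the internals of the CAFEB. As a rough illustration of the kind of cross-attention enhancement it describes, the sketch below lets reduced features attend back to the higher-dimensional backbone features to recover information lost during dimensionality reduction; all class names, argument names, and shapes here are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class CrossAttentionFeatureEnhancement(nn.Module):
    """Hypothetical sketch of a cross-attention enhancement block.

    The reduced (low-dimensional) feature stream queries the
    high-dimensional backbone features, so information discarded by
    dimensionality reduction can be re-injected. This is an assumed
    design; the paper's CAFEB may differ.
    """

    def __init__(self, reduced_dim: int, backbone_dim: int, num_heads: int = 4):
        super().__init__()
        # Queries come from the reduced features; keys/values from the
        # backbone features, which may have a different dimensionality.
        self.attn = nn.MultiheadAttention(
            embed_dim=reduced_dim, num_heads=num_heads,
            kdim=backbone_dim, vdim=backbone_dim, batch_first=True)
        self.norm = nn.LayerNorm(reduced_dim)

    def forward(self, reduced: torch.Tensor, backbone: torch.Tensor) -> torch.Tensor:
        # reduced:  (batch, n_tokens, reduced_dim)  -- query stream
        # backbone: (batch, m_tokens, backbone_dim) -- key/value stream
        enhanced, _ = self.attn(query=reduced, key=backbone, value=backbone)
        # Residual fusion keeps the original reduced features intact.
        return self.norm(reduced + enhanced)
```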
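The asymmetric loss pairs binary codes with real-valued network outputs so that their inner products track the label-derived similarity. A minimal sketch of such an objective, in the style of common asymmetric hashing losses (the exact loss in CASpH is not specified in the abstract and may differ), is:

```python
import torch

def asymmetric_similarity_loss(binary_codes: torch.Tensor,
                               real_features: torch.Tensor,
                               similarity: torch.Tensor) -> torch.Tensor:
    """Hedged sketch of an asymmetric similarity-preserving loss.

    binary_codes:  (n, k) with entries in {-1, +1}, e.g. the database side
    real_features: (m, k) real-valued outputs from the query network
    similarity:    (n, m) label-derived similarity, +1 similar / -1 dissimilar

    The inner product b_i^T u_j, scaled by code length k, is pushed toward
    s_ij, so similarity in Hamming space mirrors semantic similarity in the
    original space. This formulation is an assumption, not the paper's
    exact objective.
    """
    k = binary_codes.size(1)
    inner = binary_codes @ real_features.t() / k  # (n, m), values in [-1, 1]
    return ((inner - similarity) ** 2).mean()

# Illustrative usage with random tensors:
if __name__ == "__main__":
    torch.manual_seed(0)
    b = torch.sign(torch.randn(8, 32))   # 8 binary codes of length 32
    u = torch.tanh(torch.randn(5, 32))   # 5 real-valued query features
    s = torch.sign(torch.randn(8, 5))    # random +1/-1 similarity matrix
    print(asymmetric_similarity_loss(b, u, s))
```

Because one side of the product is already binary, gradients flow only through the real-valued stream, which is one reason such asymmetric objectives tend to converge faster than fully symmetric ones.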