Abstract: Cross-modal hashing has garnered considerable attention and achieved great success in many cross-media similarity search applications due to its prominent computational efficiency and low storage overhead. However, it remains challenging to effectively exploit the multilevel semantics of the entire database to jointly bridge the semantic and heterogeneity gaps across different modalities. In this paper, we propose a novel Modality-Invariant Asymmetric Networks (MIAN) architecture, which explores asymmetric intra- and inter-modal similarity preservation under a probabilistic modality alignment framework. Specifically, an intra-modal asymmetric network is conceived to capture the query-vs-all internal pairwise similarities for each modality in a probabilistic asymmetric learning manner. Moreover, an inter-modal asymmetric network is deployed to fully harness the cross-modal semantic similarities, supported by a maximum inner product search formulation between two distinct hash embeddings. In particular, the pairwise, piecewise and transformed semantics are jointly incorporated into one unified semantic-preserving hash code learning scheme. Furthermore, we construct a modality alignment network to distill redundancy-free visual features and maximize the conditional bottleneck information between different modalities. Such a network can close the heterogeneity and domain-shift gaps across different modalities, enabling the model to yield discriminative modality-invariant hash codes. Extensive experiments demonstrate that our MIAN approach outperforms state-of-the-art cross-modal hashing methods.
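To make the asymmetric inner-product formulation concrete, the following is a minimal sketch, not the authors' implementation: it assumes a continuous query embedding from one modality is compared against pre-computed binary database codes, and a scaled inner product is regressed toward a label-derived similarity. All names, shapes, and the simple squared loss are illustrative assumptions; MIAN's actual objective additionally involves the probabilistic intra-modal, piecewise/transformed semantic, and modality alignment terms described above.

```python
import numpy as np

# Sketch of asymmetric inner-product similarity preservation:
# a real-valued query embedding vs. binary database hash codes.

rng = np.random.default_rng(0)
k = 32          # hash code length (assumed)
n_db = 1000     # database size (assumed)

# Hypothetical inputs: continuous query embedding (e.g., a network output
# squashed into [-1, 1]) and binary {-1, +1} codes for the database items.
query_embed = np.tanh(rng.normal(size=k))
db_codes = np.sign(rng.normal(size=(n_db, k)))

# Supervised semantic similarity in {-1, +1}, normally derived from shared
# labels; random here purely for illustration.
sim = np.sign(rng.normal(size=n_db))

# Maximum-inner-product-style similarity between the continuous query and
# every binary database code, scaled by 1/k so it lies in [-1, 1].
inner = db_codes @ query_embed / k

# A simple squared loss pulling the scaled inner products toward the
# supervised similarities (one of several terms in the full model).
loss = np.mean((inner - sim) ** 2)
print(f"asymmetric similarity-preservation loss: {loss:.4f}")
```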