A Reconstruction-Based Visual-Acoustic-Semantic Embedding Method for Speech-Image Retrieval

Published: 01 Jan 2023 · Last Modified: 03 Dec 2024 · IEEE Trans. Multim. 2023 · CC BY-SA 4.0
Abstract: Speech-image retrieval aims at learning the relevance between images and speech. Prior approaches are mainly based on bi-modal contrastive learning, which cannot well alleviate the cross-modal heterogeneity between the visual and acoustic modalities. To address this issue, we propose a visual-acoustic-semantic embedding (VASE) method. First, we propose a tri-modal ranking loss that takes advantage of the semantic information corresponding to the acoustic data, introducing an auxiliary alignment that enhances the alignment between image and speech. Second, we introduce a cycle-consistency loss based on feature reconstruction, which further alleviates the heterogeneity between different data modalities (e.g., visual-acoustic, visual-textual, and acoustic-textual). Extensive experiments demonstrate the effectiveness of the proposed method. In addition, our VASE model achieves state-of-the-art performance on the speech-image retrieval task on the Flickr8K [Harwath and Glass, 2015] and Places [Harwath et al., 2018] datasets.
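The abstract names the two training objectives but does not give their formulas. The sketch below shows one common way such objectives are realized in PyTorch: a hinge-based max-margin ranking loss (in the style of visual-semantic embedding models) summed over the three modality pairs, plus a feature-reconstruction cycle-consistency term. The function names, margin value, loss weighting, and the linear cross-modal mappings `f` and `g` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ranking_loss(a, b, margin=0.2):
    """Hinge-based max-margin ranking loss over an in-batch similarity
    matrix: diagonal entries are matched pairs, all others are negatives.
    (Standard VSE-style loss; margin is an illustrative choice.)"""
    scores = a @ b.t()                                  # (B, B) cosine similarities
    pos = scores.diag().view(-1, 1)                     # matched-pair scores
    cost_a = (margin + scores - pos).clamp(min=0)       # negatives for each row
    cost_b = (margin + scores - pos.t()).clamp(min=0)   # negatives for each column
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    return cost_a.masked_fill(mask, 0).sum() + cost_b.masked_fill(mask, 0).sum()

def tri_modal_rank(img, spc, txt, margin=0.2):
    """Tri-modal ranking loss: sum of pairwise ranking losses over the
    three modality pairs; the textual branch supplies the auxiliary
    alignment described in the abstract."""
    img, spc, txt = (F.normalize(x, dim=-1) for x in (img, spc, txt))
    return (ranking_loss(img, spc, margin)      # visual-acoustic
            + ranking_loss(img, txt, margin)    # visual-textual
            + ranking_loss(spc, txt, margin))   # acoustic-textual

def cycle_loss(img, spc, f, g):
    """Feature-reconstruction cycle consistency (a plausible reading of
    the abstract, not the paper's stated formulation): map image features
    into the acoustic space and back, and vice versa, then penalize the
    round-trip reconstruction error."""
    return F.mse_loss(g(f(img)), img) + F.mse_loss(f(g(spc)), spc)

# Illustrative usage with random features (all shapes/modules hypothetical)
B, D = 32, 512
img, spc, txt = (torch.randn(B, D) for _ in range(3))
f = torch.nn.Linear(D, D)   # image -> acoustic feature space
g = torch.nn.Linear(D, D)   # acoustic -> image feature space
loss = tri_modal_rank(img, spc, txt) + cycle_loss(img, spc, f, g)
loss.backward()
```

In this reading, the ranking terms pull matched embeddings together across all three modality pairs, while the cycle term constrains the cross-modal mappings to be approximately invertible, which is one way a reconstruction objective can reduce the visual-acoustic gap.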