Abstract: Cross-modal audio-visual correlation learning, which aims to embed audio and visual feature sequences into a common subspace where their correlation is maximized, has been an active research topic. The challenge of audio-visual correlation learning lies in two major aspects: 1) audio and visual feature sequences contain different patterns and belong to different feature spaces, and 2) semantic mismatches between audio and visual sequences inevitably arise during cross-modal matching. Most existing methods take only the first aspect into account and therefore have difficulty distinguishing matched from mismatched semantic correlations between audio and visual sequences. In this work, an adversarial contrastive autoencoder with a shared attention network (ACASA) is proposed for correlation learning in audio-visual retrieval. In particular, the proposed shared attention mechanism is parameterized so that local salient information is enhanced and contributes to the final feature representation. Simultaneously, adversarial contrastive learning is exploited to maximize semantic feature consistency and to improve the ability to distinguish matched from mismatched samples. The model is supervised with both inter-modal and intra-modal semantic information to learn more discriminative feature representations. Extensive experiments on the VEGAS and AVE datasets demonstrate that the proposed ACASA method outperforms state-of-the-art approaches in cross-modal audio-visual retrieval.
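The abstract does not include implementation details, so the following is only a minimal sketch, assuming a PyTorch-style setup, of the two ideas it names: a parameterized attention module whose weights can be shared across the audio and visual branches, and a contrastive objective that pulls matched audio-visual pairs together while pushing mismatched pairs apart. All names (SharedAttentionEncoder, contrastive_loss, the dimensions) are illustrative assumptions, and the adversarial and autoencoder reconstruction components of ACASA are omitted.

```python
# Hypothetical sketch, not the authors' ACASA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedAttentionEncoder(nn.Module):
    """Projects a feature sequence into a common subspace using
    parameterized attention pooling; the attention scorer can be
    shared between the audio and visual branches."""

    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(in_dim, embed_dim)
        self.attn = nn.Linear(embed_dim, 1)  # one salience score per time step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, in_dim)
        h = torch.tanh(self.proj(x))            # (B, T, D)
        weights = torch.softmax(self.attn(h), dim=1)  # emphasize salient steps
        pooled = (weights * h).sum(dim=1)       # (B, D)
        return F.normalize(pooled, dim=-1)


def contrastive_loss(audio_emb, visual_emb, temperature: float = 0.07):
    """Symmetric InfoNCE-style loss: matched audio-visual pairs are
    positives, all other pairs in the batch are negatives."""
    logits = audio_emb @ visual_emb.t() / temperature   # (B, B)
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Dummy batch: 8 paired sequences with modality-specific raw dimensions.
    audio_seq = torch.randn(8, 20, 128)    # e.g. 20 audio frames, 128-d features
    visual_seq = torch.randn(8, 20, 512)   # e.g. 20 video frames, 512-d features

    audio_enc = SharedAttentionEncoder(in_dim=128)
    visual_enc = SharedAttentionEncoder(in_dim=512)
    visual_enc.attn = audio_enc.attn       # tie the attention parameters across branches

    loss = contrastive_loss(audio_enc(audio_seq), visual_enc(visual_seq))
    print(f"contrastive loss: {loss.item():.4f}")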
External IDs: dblp:journals/access/ZhangYTL25