An eye for an ear: zero-shot audio description leveraging an image captioner with audio-visual token distribution matching
Keywords: Multimodal representation learning, Audio Captioning, Image Captioning, Audio-Visual, Large Language Model
TL;DR: We propose a novel method for aligning audio and image tokens, enabling zero-shot audio captioning through MMD and Optimal Transport by leveraging a large vision-language model, and achieving superior performance in unsupervised settings.
Abstract: Multimodal large language models have fueled progress in image captioning. These models, fine-tuned on vast image datasets, exhibit a deep understanding of semantic concepts.
In this work, we show that this ability can be re-purposed for audio captioning: the joint image-language decoder can be leveraged to describe the auditory content of audiovisual videos. This can be achieved via multimodal alignment.
Yet, this multimodal alignment task is non-trivial due to the inherent disparity between audible and visible elements in real-world videos. Moreover, multimodal representation learning often relies on contrastive learning, which faces the challenge of the so-called modality gap, hindering smooth integration between modalities. In this work, we introduce a novel methodology for bridging the audiovisual modality gap by matching the distribution of tokens produced by an audio backbone to that of an image captioner. Our approach aligns the audio token distribution with the image token distribution, enabling the model to perform zero-shot audio captioning in an unsupervised fashion. This alignment allows for the use of either audio or audiovisual input by combining or substituting the image encoder with the aligned audio encoder. Our method achieves significantly improved performance in zero-shot audio captioning compared to existing approaches.
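To make the token-distribution-matching idea concrete, below is a minimal sketch of an RBF-kernel MMD loss between audio and image token embeddings. The tensor shapes, kernel bandwidth, and variable names are illustrative assumptions, not the authors' implementation (which additionally uses Optimal Transport).

```python
# Illustrative sketch only: aligning audio tokens to image-captioner tokens
# with a squared-MMD objective under an RBF kernel. Shapes are assumptions.
import torch


def rbf_kernel(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # x: (n, d), y: (m, d) -> (n, m) Gaussian kernel matrix
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))


def mmd_loss(audio_tokens: torch.Tensor, image_tokens: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD between the audio and image token distributions."""
    k_aa = rbf_kernel(audio_tokens, audio_tokens, sigma).mean()
    k_ii = rbf_kernel(image_tokens, image_tokens, sigma).mean()
    k_ai = rbf_kernel(audio_tokens, image_tokens, sigma).mean()
    return k_aa + k_ii - 2.0 * k_ai


# Example: align 64 audio tokens to 64 image tokens in a 768-dim embedding space.
audio_tokens = torch.randn(64, 768, requires_grad=True)  # from a trainable audio backbone
image_tokens = torch.randn(64, 768)                      # from the frozen image captioner
loss = mmd_loss(audio_tokens, image_tokens)
loss.backward()  # gradients flow only into the audio tokens
```

In this hypothetical setup, only the audio-side parameters would receive gradients, so the image captioner's token space acts as a fixed target distribution for the alignment.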
Primary Area: Speech and audio
Submission Number: 19776