Analyzing Multimodal Objectives Through the Lens of Generative Diffusion Guidance

Published: 06 Mar 2023, Last Modified: 17 Nov 2024 · MRL 2023 Spotlight
Keywords: Vision-Language, Diffusion, Generative Models
TL;DR: We analyze the properties of widely used multimodal objectives by studying the generative guidance they provide.
Abstract: Recent years have witnessed astonishing advances in multimodal representation learning, with contrastive learning serving as the cornerstone of major breakthroughs. The latest works have delivered further improvements by incorporating additional objectives such as masked modeling and captioning into these frameworks, but our understanding of how these objectives facilitate learning remains vastly incomplete. In this paper, we leverage the fact that classifier-guided diffusion models generate images reflecting the semantic signals provided by the classifier to study the characteristics of multimodal learning objectives. Specifically, we compare the contrastive, matching, and captioning losses in terms of the semantic signals they provide, and introduce a simple baseline that not only supports our analyses but also improves the quality of generative guidance in a straightforward manner.
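For context, classifier guidance (Dhariwal & Nichol, 2021) perturbs a diffusion model's noise prediction with the gradient of a conditioning objective with respect to the noisy image, which is why each multimodal loss can be read as a generative guidance signal. Below is a minimal PyTorch sketch of one guided denoising step; `eps_model` and `log_prob_fn` are hypothetical interfaces assumed for illustration (not the paper's released code), where `log_prob_fn` could wrap a contrastive, matching, or captioning score for a text prompt.

```python
import torch

def classifier_guided_eps(x_t, t, eps_model, log_prob_fn, alpha_bar_t, scale=1.0):
    """One classifier-guided noise prediction (minimal sketch).

    Assumed interfaces (hypothetical):
      eps_model(x_t, t) -> predicted noise epsilon for noisy image x_t
      log_prob_fn(x_t)  -> log p(text | x_t), e.g. the negative of a
                           contrastive, matching, or captioning loss
    """
    x_t = x_t.detach().requires_grad_(True)
    log_prob = log_prob_fn(x_t)
    # This gradient is the "semantic signal" the paper analyzes: how well
    # it points toward prompt-consistent images varies across objectives.
    grad = torch.autograd.grad(log_prob.sum(), x_t)[0]
    with torch.no_grad():
        eps = eps_model(x_t, t)
        # Standard classifier-guidance update:
        # eps_hat = eps - sqrt(1 - alpha_bar_t) * scale * grad
        eps_hat = eps - (1.0 - alpha_bar_t) ** 0.5 * scale * grad
    return eps_hat
```

Under this reading, sampling with different `log_prob_fn` choices makes the guidance quality of each objective directly visible in the generated images.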
Supplementary Material: zip
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/analyzing-multimodal-objectives-through-the/code)