Object-Centric Slot Diffusion

Published: 21 Sept 2023, Last Modified: 24 Dec 2023
NeurIPS 2023 spotlight
Keywords: Object-Centric Representation Learning, Diffusion Models, Unsupervised Representation Learning
TL;DR: We propose Latent Slot Diffusion (LSD), which combines an object-centric encoder with a diffusion decoder. It achieves unsupervised object segmentation, compositional generation, and image editing, surpassing state-of-the-art models.
Abstract: The recent success of transformer-based image generative models in object-centric learning highlights the importance of powerful image generators for handling complex scenes. However, despite the high expressiveness of diffusion models in image generation, their integration into object-centric learning remains largely unexplored. In this paper, we explore the feasibility and potential of integrating diffusion models into object-centric learning and investigate the pros and cons of this approach. We introduce Latent Slot Diffusion (LSD), a novel model that serves dual purposes: it is the first object-centric learning model to replace conventional slot decoders with a latent diffusion model conditioned on object slots, and it is also the first unsupervised compositional conditional diffusion model that operates without supervised annotations such as text. Through experiments on various object-centric tasks, including the first application of the FFHQ dataset in this field, we demonstrate that LSD significantly outperforms state-of-the-art transformer-based decoders, particularly in more complex scenes, and exhibits superior unsupervised compositional generation quality. In addition, we conduct a preliminary investigation into the integration of pre-trained diffusion models in LSD and demonstrate its effectiveness in real-world image segmentation and generation. The project page is available at https://latentslotdiffusion.github.io
Submission Number: 10448
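The following is a minimal, hypothetical sketch of the training objective the abstract describes: a Slot Attention encoder extracts object slots, and a latent diffusion decoder is trained to denoise a pre-trained autoencoder's latent conditioned on those slots. All module names (`ObjectCentricEncoder`, `lsd_training_step`, the `denoiser` and `vae_encode` callables), sizes, and the noise schedule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the LSD idea: a Slot Attention encoder produces object
# slots, and a latent diffusion decoder is trained to denoise the image's
# pre-trained autoencoder latent conditioned on those slots.
# Module names, sizes, and the noise schedule are illustrative, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlotAttention(nn.Module):
    """Simplified Slot Attention: K slots compete for input features over a few iterations."""
    def __init__(self, num_slots=4, dim=64, iters=3):
        super().__init__()
        self.iters = iters
        self.slots_init = nn.Parameter(torch.randn(1, num_slots, dim) * 0.02)
        self.to_q, self.to_k, self.to_v = (nn.Linear(dim, dim) for _ in range(3))
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, feats):                                    # feats: (B, N, D)
        B, N, D = feats.shape
        slots = self.slots_init.expand(B, -1, -1).contiguous()
        k, v = self.to_k(feats), self.to_v(feats)
        for _ in range(self.iters):
            attn = torch.softmax(self.to_q(slots) @ k.transpose(1, 2) / D ** 0.5, dim=1)
            attn = attn / attn.sum(dim=-1, keepdim=True)         # weighted mean per slot
            updates = attn @ v                                    # (B, K, D)
            slots = self.gru(updates.reshape(-1, D), slots.reshape(-1, D)).view(B, -1, D)
        return slots                                              # (B, K, D)


class ObjectCentricEncoder(nn.Module):
    """Tiny CNN backbone + Slot Attention, standing in for the paper's object-centric encoder."""
    def __init__(self, num_slots=4, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, dim, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(dim, dim, 5, stride=2, padding=2), nn.ReLU())
        self.slot_attention = SlotAttention(num_slots, dim)

    def forward(self, image):                                     # image: (B, 3, H, W)
        feats = self.backbone(image).flatten(2).transpose(1, 2)   # (B, N, dim)
        return self.slot_attention(feats)


def lsd_training_step(image, vae_encode, slot_encoder, denoiser, T=1000):
    """One training step: diffuse the VAE latent, predict the noise conditioned on slots."""
    with torch.no_grad():
        z0 = vae_encode(image)                                    # frozen pre-trained VAE latent
    slots = slot_encoder(image)                                   # object slots (B, K, D)
    t = torch.randint(0, T, (z0.shape[0],), device=z0.device)
    noise = torch.randn_like(z0)
    alpha_bar = torch.cos(t.float() / T * torch.pi / 2) ** 2      # illustrative cosine schedule
    a = alpha_bar.view(-1, 1, 1, 1)
    z_t = a.sqrt() * z0 + (1 - a).sqrt() * noise                  # forward process q(z_t | z_0)
    pred = denoiser(z_t, t, slots)                                # slot-conditioned noise prediction
    return F.mse_loss(pred, noise)                                # standard epsilon-prediction loss
```

In LSD as the abstract describes it, the denoiser would be a UNet whose cross-attention layers attend to the slots in place of text embeddings, analogous to prompt conditioning in latent diffusion; unsupervised segmentation can then be read off the slot attention maps, and compositional generation follows from recombining slots from different images before decoding.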