Learning to Place Objects into Scenes by Hallucinating Scenes around Objects

Published: 30 Oct 2023, Last Modified: 30 Nov 2023 · SyntheticData4ML 2023 Poster
Keywords: object placement, synthetic data, image diffusion
Abstract: The ability to modify images by adding new objects into a scene stands to be a powerful image editing control. However, object insertion is not robustly supported by existing diffusion-based image editing methods. The central challenge is predicting where an object should go in a scene, given only an image of the scene. To address this challenge, we propose DreamPlace, a two-step method that inserts objects of a given class into images by 1) predicting where the object is likely to go in the image and 2) inpainting the object at this location. We train our object placement model solely on synthetic data, leveraging diffusion-based image outpainting to hallucinate novel scenes surrounding a given object. Using its learned placement model, DreamPlace produces qualitatively more realistic object insertion edits than comparable diffusion-based baselines. Moreover, for the limited set of object categories where benchmark annotations exist, our placement model, despite being trained entirely on generated data, places objects up to 35% more accurately than the state-of-the-art supervised method trained on a large, manually annotated dataset (>80k annotated samples).
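The two-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `predict_placement` function, the placement-heatmap input, and the fixed box size are all assumptions; a real system would feed the resulting mask to a diffusion inpainter rather than return it.

```python
import numpy as np

def predict_placement(heatmap, box_size):
    """Step 1 (assumed interface): pick the most likely placement as the
    argmax of a class-conditional placement heatmap, standing in for the
    learned placement model."""
    h, w = heatmap.shape
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    bh, bw = box_size
    # Clamp the box so it stays fully inside the image.
    y0 = int(np.clip(y - bh // 2, 0, h - bh))
    x0 = int(np.clip(x - bw // 2, 0, w - bw))
    return y0, x0, y0 + bh, x0 + bw

def insert_object(scene, heatmap, box_size):
    """Step 2 (sketch): build an inpainting mask at the predicted location.
    A diffusion-based inpainting model would then synthesize the object
    inside the masked region."""
    y0, x0, y1, x1 = predict_placement(heatmap, box_size)
    mask = np.zeros(scene.shape[:2], dtype=bool)
    mask[y0:y1, x0:x1] = True
    return mask, (y0, x0, y1, x1)
```

A usage sketch: given a 64x64 scene and a heatmap peaked at its center, `insert_object(scene, heatmap, (16, 16))` returns a 16x16 mask centered on the peak, ready to pass to an inpainting model.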
Submission Number: 37