LACONIC: A 3D Layout Adapter for Controllable Image Creation

Published: 20 Aug 2025, Last Modified: 26 Aug 2025
License: CC BY 4.0
Keywords: Layout-guided Image Synthesis, 3D-Aware Generation, Controllable Diffusion Models, Adapter Networks
TL;DR: We introduce LACONIC, a novel adapter network that enables fine-grained 3D layout control in text-to-image generative models. Accepted to ICCV 2025.
Abstract: Existing generative approaches for guided image synthesis of multi-object scenes typically rely on 2D controls in the image or text space. As a result, these methods struggle to maintain and respect the consistent three-dimensional geometric structure underlying the scene. In this paper, we propose a novel conditioning approach, training method, and adapter network that can be plugged into pretrained text-to-image diffusion models. Our approach endows such models with 3D awareness while leveraging their rich prior knowledge. Our method supports camera control and conditioning on explicit 3D geometries and, for the first time, accounts for the entire context of a scene, i.e., both on- and off-screen items, to synthesize plausible and semantically rich images. Despite its multi-modal nature, our model is lightweight, requires a reasonable amount of data for supervised learning, and shows remarkable generalization power. We also introduce methods for intuitive and consistent image editing and restyling, e.g., positioning, rotating, or resizing individual objects in a scene. Our method integrates well within various image creation workflows and enables a richer set of applications compared to previous approaches.
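To make the adapter idea concrete, the sketch below shows one plausible way to turn per-object 3D layout parameters (position, size, rotation, category) into conditioning tokens for a frozen text-to-image diffusion model's cross-attention. This is a minimal illustration under assumed design choices, not the paper's actual architecture: all module names, dimensions, and the geometry parameterization (quaternion rotation, camera-space coordinates) are assumptions introduced here.

```python
# Hypothetical sketch of a 3D layout adapter. Each scene object is described
# by 10 geometry values (3 position + 3 size + 4 rotation quaternion) plus a
# category id; a small transformer mixes the object tokens so every token can
# attend to the full scene context, including off-screen objects.
import torch
import torch.nn as nn

class Layout3DAdapter(nn.Module):
    def __init__(self, n_classes: int = 128, d_model: int = 768):
        super().__init__()
        self.geom_proj = nn.Linear(10, d_model)      # project raw geometry
        self.class_emb = nn.Embedding(n_classes, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.mixer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, geom: torch.Tensor, cls: torch.Tensor) -> torch.Tensor:
        # geom: (B, N, 10) per-object geometry in camera coordinates
        # cls:  (B, N)     integer object-category ids
        tokens = self.geom_proj(geom) + self.class_emb(cls)
        return self.mixer(tokens)                    # (B, N, d_model) tokens

# Usage sketch: layout tokens would be concatenated with the frozen text
# encoder's tokens and fed as cross-attention context to the pretrained UNet,
# so the base model stays fixed and only the adapter is trained.
adapter = Layout3DAdapter()
geom = torch.randn(1, 5, 10)             # a scene with 5 objects
cls = torch.randint(0, 128, (1, 5))
layout_tokens = adapter(geom, cls)
# context = torch.cat([text_tokens, layout_tokens], dim=1)
```

Keeping the base diffusion model frozen and training only a small token encoder is consistent with the abstract's claims of a lightweight model that preserves the pretrained prior; the exact injection mechanism used by LACONIC may differ.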
Supplementary Material: pdf
Submission Number: 23