Keywords: Disentanglement, Causality, Attribute-Guided Image Generation
TL;DR: We propose a causality-aware disentanglement method for attribute-guided image generation
Abstract: Controllable image generation is a fundamental problem in machine learning and computer vision. Attribute-guided generative models enable explicit control over image content using labeled attributes, but often struggle to disentangle individual attributes and mitigate unwanted correlations—for example, adding eyeglasses may inadvertently alter a person’s perceived age.
In this work, we propose an attribute-guided generative framework that addresses both challenges. Our method learns a mask-based representation for each attribute label, encouraging disentanglement by limiting each attribute's influence to a small subset of the representation dimensions while preserving the information necessary to represent the label. To address attribute correlations, we incorporate classical causal discovery techniques to model inter-attribute dependencies and introduce a causal conditioning strategy that explicitly reduces undesirable correlations. Importantly, we provide theoretical guarantees showing that our method can recover the latent generative factors associated with individual attributes. Extensive experiments on diverse datasets demonstrate that our framework substantially improves attribute-level controllability and interpretability, outperforming existing baselines on attribute-guided image generation tasks.
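The mask-based mechanism described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the mask layout, the per-attribute dimension budget, and the `apply_attribute` helper are all hypothetical, chosen only to show how restricting each attribute to a sparse subset of latent dimensions keeps edits localized.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_attrs = 16, 3

# Hypothetical sparse masks: each attribute "owns" a small, disjoint
# block of latent dimensions (4 per attribute, for illustration).
masks = np.zeros((n_attrs, latent_dim))
for k in range(n_attrs):
    masks[k, k * 4:(k + 1) * 4] = 1.0

def apply_attribute(z, k, delta):
    """Shift only the latent dimensions owned by attribute k."""
    return z + masks[k] * delta

z = rng.standard_normal(latent_dim)
z_edit = apply_attribute(z, 0, 2.0)

# Dimensions outside attribute 0's mask are untouched, so editing
# one attribute cannot leak into the others' representations.
assert np.allclose(z_edit[4:], z[4:])
print(np.count_nonzero(z_edit - z))  # prints 4
```

In the paper's setting the masks are learned rather than fixed, and the causal conditioning step additionally propagates edits along discovered inter-attribute dependencies; this sketch covers only the localization property.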
Supplementary Material: zip
Primary Area: generative models
Submission Number: 13743