Keywords: image generation, generative model, multimodal chain of thought
Abstract: Unified generative models have shown remarkable performance in both text and image generation. When faced with image synthesis tasks, they adopt straightforward text-to-image (T2I) generation. However, we find that direct T2I generation limits unified generative models in handling complex compositional instructions, which frequently arise in real-world application scenarios. Despite the importance of this issue, existing works focus predominantly on improving the basic image generation capability of unified generative models. While such improvements contribute to complex image generation to some extent, they still fail to resolve the problem adequately. Inspired by the step-by-step manner in which Chain of Thought (CoT) solves complex problems, this work introduces CoT into unified generative models to address the challenges of complex image generation that direct T2I generation cannot effectively solve, thereby endowing models with enhanced image generation ability. To achieve this, we first introduce Functionality-oriented eXperts (FoXperts), an expert-parallel architecture in our model FoX that assigns experts by function. In this way, FoXperts disentangles the potential conflicts in current mainstream modality-oriented designs and provides a sound foundation for CoT. When introducing CoT, the first question is how to design a CoT approach tailored to complex image generation. To this end, we emulate a human-like artistic workflow (planning, acting, reflection, and correction) and propose the Multimodal Chain of Thought (MCoT) approach, since the data involved spans multiple modalities (text and image). To address the subsequent challenge of designing an effective MCoT training paradigm, we develop a multi-task joint training paradigm that equips the model with all capabilities required for each MCoT step in a disentangled manner.
This paradigm overcomes the difficulty and impracticality of collecting consistent multi-step data tuples for training. Extensive experiments demonstrate that FoX consistently outperforms existing unified models on various T2I benchmarks, delivering notable quantitative improvements in complex image generation.
Primary Area: generative models
Submission Number: 5840