Investigating Conceptual Blending of a Diffusion Model for Improving Nonword-to-Image Generation

Published: 20 Jul 2024 · Last Modified: 21 Jul 2024 · MM2024 Oral · CC BY 4.0
Abstract: Text-to-image diffusion models sometimes depict blended concepts in generated images. A promising use case of this effect is the nonword-to-image generation task, which attempts to generate images that are intuitively imaginable from a non-existing word (nonword). To realize nonword-to-image generation, an existing study associated nonwords with similar-sounding words. Since a nonword can have multiple similar-sounding words, generating images that blend their concepts would increase intuitiveness, facilitating creative activities and supporting computational psycholinguistics. Nevertheless, no existing study has quantitatively evaluated this effect, either in diffusion models or in the nonword-to-image generation paradigm. Therefore, this paper first analyzes conceptual blending in a pretrained diffusion model, Stable Diffusion. The analysis reveals that a high percentage of generated images depict blended concepts when the input is an embedding interpolated between the text embeddings of two prompts referring to different concepts. Next, this paper explores which text embedding space conversion method of an existing nonword-to-image generation framework best ensures both the occurrence of conceptual blending and image generation quality. We compare the conventional direct prediction approach with a proposed method that combines $k$-nearest neighbor search and linear regression. Evaluation reveals that the more accurate embedding space conversion of the proposed method improves image generation quality, while the emergence of conceptual blending is attributable mainly to specific dimensions of the high-dimensional text embedding space.
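The blending analysis above rests on feeding the diffusion model an embedding interpolated between the text embeddings of two prompts. The following is a minimal sketch of that setup with the diffusers library; the model ID, the example prompts, and the midpoint weight alpha = 0.5 are illustrative assumptions, not details taken from the paper.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical checkpoint choice; the paper only specifies "Stable Diffusion".
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def encode(prompt: str) -> torch.Tensor:
    # Tokenize and encode a prompt with the pipeline's CLIP text encoder.
    tokens = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids.to(pipe.device)
    with torch.no_grad():
        return pipe.text_encoder(tokens)[0]  # last hidden state

# Two prompts referring to different concepts (illustrative examples).
emb_a = encode("a photograph of a lion")
emb_b = encode("a photograph of a tiger")

# Linear interpolation between the two text embeddings; alpha = 0.5 is
# the midpoint, where blended concepts are expected to appear.
alpha = 0.5
blended = (1 - alpha) * emb_a + alpha * emb_b

# Generate directly from the interpolated embedding.
image = pipe(prompt_embeds=blended).images[0]
image.save("blended.png")
```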
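The proposed conversion method combines $k$-nearest neighbor search with linear regression. One plausible reading of that combination, sketched below with scikit-learn, is a locally linear map: find the $k$ training pairs nearest to a query in the source embedding space, fit a linear regression on those pairs alone, and apply it to the query. The function name, array shapes, and choice of k here are hypothetical, not the paper's exact formulation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LinearRegression

def convert(query: np.ndarray, X: np.ndarray, Y: np.ndarray, k: int = 10) -> np.ndarray:
    """Map a source-space embedding into the target text embedding space.

    X: (n_words, d_src) embeddings of known words in the source space.
    Y: (n_words, d_tgt) the same words' embeddings in the diffusion
       model's text embedding space (paired row-for-row with X).
    """
    # Find the k nearest neighbors of the query in the source space ...
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, idx = nn.kneighbors(query.reshape(1, -1))
    idx = idx[0]
    # ... then fit a local linear map on those neighbor pairs and apply it.
    reg = LinearRegression().fit(X[idx], Y[idx])
    return reg.predict(query.reshape(1, -1))[0]
```

Compared with a single global regressor (the direct prediction baseline), restricting the fit to a neighborhood lets the map adapt to local structure of the embedding space, which is one way the reported accuracy gain could arise.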
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Generation] Generative Multimedia
Relevance To Conference: In the multimedia processing field, methods that generate images from a language prompt (i.e., text-to-image generation) have recently gained attention because they tackle the multimodal problem of converting language into images. Our work analyzes the phenomenon of conceptual blending in the image generation results of a text-to-image model. The analysis provides a quantitative view of conceptual blending in these models, which will encourage multimedia processing researchers working with such multimodal generative models to exploit this phenomenon in their ongoing work and to develop new ideas for image generation containing blended concepts.
Supplementary Material: zip
Submission Number: 2993