Attribute-Guided Diffusion for Unsupervised Few-Shot Font Generation

22 Sept 2023 (modified: 29 Jan 2024), ICLR 2024 Conference Withdrawn Submission
Keywords: Diffusion Models; Font Generation
Abstract: Font generation is a challenging problem, especially for writing systems with very large character sets such as Chinese, and it has attracted considerable research attention in recent years. However, existing font generation methods are mostly based on generative adversarial networks (GANs), whose training instability and mode collapse have left the performance of many methods at a bottleneck. To address this problem, we apply a more recent class of generative models, the diffusion model, to this task. We decouple content and style to extract image attributes, combine the desired content and style into a condition fed to the diffusion model, and thereby guide it to generate glyphs in the corresponding style. Our method trains stably on large datasets, and our model achieves strong performance both qualitatively and quantitatively compared with previous font generation methods.
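The abstract's core idea, extracting decoupled content and style attributes and feeding them as conditions to a diffusion model trained with the standard noise-prediction objective, can be sketched as follows. This is a hypothetical toy illustration, not the paper's actual architecture: the encoders `extract_content`/`extract_style` are placeholder poolings, and the noise predictor is a linear map standing in for a conditional U-Net.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_content(glyph):
    # Placeholder content encoder (illustrative only): row-wise mean pooling.
    return glyph.mean(axis=1)

def extract_style(glyph):
    # Placeholder style encoder (illustrative only): column-wise mean pooling.
    return glyph.mean(axis=0)

def noise_predictor(x_t, t, content, style, Wmat):
    # Toy linear predictor conditioned on [x_t, t, content, style];
    # in a real model this would be a U-Net with conditioning layers.
    feats = np.concatenate([x_t.ravel(), [float(t)], content, style])
    return (Wmat @ feats).reshape(x_t.shape)

def ddpm_train_step(x0, t, alpha_bar, content, style, Wmat):
    # Standard DDPM objective: noise a clean glyph x0 to timestep t,
    # then ask the conditional predictor to recover the injected noise.
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    eps_hat = noise_predictor(x_t, t, content, style, Wmat)
    return np.mean((eps_hat - eps) ** 2)

side, T = 8, 10
alpha_bar = np.cumprod(np.linspace(0.99, 0.9, T))  # toy noise schedule
glyph = rng.standard_normal((side, side))          # stand-in glyph image
c, s = extract_content(glyph), extract_style(glyph)
Wmat = rng.standard_normal((side * side, side * side + 1 + side + side)) * 0.01
loss = ddpm_train_step(glyph, t=5, alpha_bar=alpha_bar, content=c, style=s, Wmat=Wmat)
print(loss)
```

At sampling time, the same content embedding would be paired with a new style embedding to steer the reverse diffusion toward a glyph with that style.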
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5033