SKETCHCREATOR: Text-Guided Diffusion Models for Vectorized Sketch Generation and Editing

Published: 01 Jan 2023 · Last Modified: 21 Apr 2025 · IC-NIDC 2023 · CC BY-SA 4.0
Abstract: We present SketchCreator, a text-to-sketch generative framework built on diffusion models that produces human-like sketches from a text description. Sketches are represented as a sequence of stroke points, and our model directly learns the distribution of these ordered points under the guidance of the prompt, i.e., the text description. Unlike prior works that focus on single-object sketch generation, our model can flexibly generate both single-object sketches and scene sketches conditioned on the prompt. In particular, our model generates scene sketches in a unified manner without explicitly determining the layout of the scene, which previous works typically require; consequently, the objects in a generated scene sketch are more reasonably organized and visually appealing. Additionally, our model can be readily applied to text-conditioned sketch editing, which has great practical value. Experimental results on QuickDraw and FS-COCO validate the effectiveness of our model.
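The stroke-point sequence representation mentioned in the abstract can be illustrated with a minimal sketch below. This follows the QuickDraw-style convention of relative offsets with a pen-state flag; the exact encoding used by SketchCreator is not specified here, so the function name and format are assumptions for illustration only.

```python
# Hypothetical illustration of a vectorized sketch as an ordered sequence
# of stroke points, using the QuickDraw-style (dx, dy, pen_state) format.
# pen_state = 1 marks the last point of a stroke (pen lifted afterwards).

def to_stroke_points(strokes):
    """Convert strokes given in absolute coordinates into a flat
    sequence of relative (dx, dy, pen_state) stroke points."""
    points = []
    prev = (0, 0)
    for stroke in strokes:
        for i, (x, y) in enumerate(stroke):
            pen_up = 1 if i == len(stroke) - 1 else 0
            points.append((x - prev[0], y - prev[1], pen_up))
            prev = (x, y)
    return points

# Two short strokes in absolute coordinates.
strokes = [[(0, 0), (5, 0)], [(5, 5), (10, 5)]]
print(to_stroke_points(strokes))
# → [(0, 0, 0), (5, 0, 1), (0, 5, 0), (5, 0, 1)]
```

A diffusion model over such sequences would then learn to denoise these ordered points conditioned on the text prompt, rather than operating on raster pixels.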
