Abstract: Sketch-based image modification is an interactive approach to image editing in which users express their editing intent by drawing sketches on the input image, and the model generates the modified image accordingly. Existing methods often require the region to be modified to be specified through a pixel-level mask, turning image modification into a sketch-based inpainting task. Such approaches, however, have a limitation: the mask can discard essential semantic information, forcing the model to restore the masked region rather than edit the image. To address this challenge, we propose a novel mask-free image modification method, named Draw2Edit, which lets users draw sketches and edit images directly, without pixel-level masks, simplifying the editing process. In addition, we employ free-form deformation to generate structurally corresponding sketch-image training pairs, addressing the difficulty of collecting paired sketches and images for training while improving the model's effectiveness on sketch-guided tasks. We evaluate the proposed method on commonly used sketch-guided inpainting datasets, including CelebA-HQ and Places2, and demonstrate state-of-the-art performance in both quantitative evaluation and user studies. Our code is available at https://github.com/YiwenXu/Draw2Edit.
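To illustrate the data-generation idea mentioned in the abstract, the following is a minimal, hypothetical sketch of free-form deformation for producing structurally corresponding training pairs: a coarse grid of random control-point displacements is interpolated to a dense warp field and applied to an image (or its edge map). This is an assumption-laden illustration of the general technique, not the authors' actual implementation; the function name, grid size, and displacement magnitudes are invented for the example.

```python
import numpy as np
from scipy.ndimage import map_coordinates


def free_form_deform(img, grid=4, max_shift=3.0, seed=0):
    """Warp a 2-D array with a coarse random control-point grid
    (a simple free-form deformation). Applied to an edge map, the
    warped result still corresponds structurally to the original
    image, yielding a pseudo sketch-image training pair."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    # Random x/y displacements at the coarse control points.
    dx = rng.uniform(-max_shift, max_shift, (grid, grid))
    dy = rng.uniform(-max_shift, max_shift, (grid, grid))
    # Bilinearly upsample the coarse displacement field to full resolution.
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy = ys * (grid - 1) / (h - 1)
    cx = xs * (grid - 1) / (w - 1)
    full_dy = map_coordinates(dy, [cy, cx], order=1)
    full_dx = map_coordinates(dx, [cy, cx], order=1)
    # Sample the source at the displaced coordinates (backward warp).
    return map_coordinates(img, [ys + full_dy, xs + full_dx],
                           order=1, mode="nearest")


# Toy "edge map": the outline of a square on a dark canvas.
canvas = np.zeros((64, 64))
canvas[20:44, 20] = canvas[20:44, 43] = 1.0
canvas[20, 20:44] = canvas[43, 20:44] = 1.0
warped = free_form_deform(canvas, grid=4, max_shift=3.0, seed=42)
```

In a real pipeline, `warped` would play the role of a hand-drawn sketch whose structure still matches `canvas`, sidestepping manual collection of paired sketches and images.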