CannyEdit: Selective Canny Control and Dual-Prompt Guidance for Training-free Image Editing

18 Sept 2025 (modified: 01 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: image editing, Canny control, regional text guidance, training-free, seamless editing
Abstract: Recent advances in text-to-image (T2I) models have enabled training-free regional image editing by leveraging the generative priors of foundation models. However, existing methods struggle to balance text adherence in edited regions, context fidelity in unedited areas, and seamless integration of edits. We introduce ***CannyEdit***, a novel training-free framework that addresses this trilemma through two key innovations. First, *Selective Canny Control* applies structural guidance from a Canny ControlNet only to the unedited regions, preserving the original image's details while allowing precise, text-driven changes in the specified editable area. Second, *Dual-Prompt Guidance* utilizes both a local prompt for the specific edit and a global prompt for overall scene coherence. Together, these components enable controllable local editing for object addition, replacement, and removal, achieving a superior trade-off among text adherence, context fidelity, and editing seamlessness compared to current region-based methods. Beyond this, CannyEdit offers exceptional flexibility: *it operates effectively with rough masks or even single-point hints in object addition tasks*. Furthermore, the framework integrates with vision-language models *in a training-free manner* for complex instruction-based editing that requires planning and reasoning. Our extensive evaluations demonstrate CannyEdit's strong performance against leading instruction-based editors in complex object addition scenarios.
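The sketch below illustrates the two ideas named in the abstract in the most generic way possible: a Canny edge map that is zeroed out inside the edit region (so structural guidance only constrains the unedited context) and a conditioning prompt built from a local edit description plus a global scene description. It is *not* the authors' implementation; it assumes off-the-shelf `diffusers` components (a SD 1.5 Canny ControlNet with an inpainting pipeline), and all model IDs, thresholds, file names, and the simple prompt concatenation are illustrative assumptions.

```python
# Minimal conceptual sketch of "selective Canny control" + "dual-prompt guidance"
# using generic diffusers components -- NOT the CannyEdit implementation.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline


def selective_canny(image_rgb: np.ndarray, edit_mask: np.ndarray,
                    low: int = 100, high: int = 200) -> Image.Image:
    """Return a Canny control image with edges kept only OUTSIDE the edit region."""
    gray = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, low, high)        # (H, W) uint8 edge map
    edges[edit_mask > 0] = 0                  # drop structural guidance inside the editable area
    return Image.fromarray(np.stack([edges] * 3, axis=-1))


# Illustrative model IDs (assumption): SD 1.5 Canny ControlNet + SD 1.5 inpainting.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

source = np.array(Image.open("scene.png").convert("RGB"))
mask = np.array(Image.open("edit_mask.png").convert("L"))   # rough mask of the edit region

# Dual-prompt guidance, crudely approximated here by concatenating a local edit
# prompt with a global scene prompt into one conditioning string.
local_prompt = "a red vintage bicycle leaning against the wall"
global_prompt = "a quiet cobblestone street at dusk, photorealistic"
prompt = f"{local_prompt}, {global_prompt}"

edited = pipe(
    prompt=prompt,
    image=Image.fromarray(source),
    mask_image=Image.fromarray(mask),
    control_image=selective_canny(source, mask),
    num_inference_steps=30,
).images[0]
edited.save("edited.png")
```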
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 11602