Abstract: Image diffusion models have enabled the generation of visually compelling images, and this powerful generative capability has opened new avenues for tasks such as image morphing. Previous diffusion model-based image morphing approaches primarily focus on interpolating text embeddings and latent vectors. However, directly interpolating such high-level features provides no explicit shape control, so the resulting morphing processes lack smooth shape transitions. Moreover, unconstrained interpolation can drive results outside the diffusion model's domain, producing noticeable artifacts. To address these issues, we propose a novel diffusion model-based method that ensures smooth shape transitions. To overcome the lack of explicit shape control, our key idea is to utilize normal maps as geometric guidance for precise image morphing. By integrating 3D reconstruction techniques with variational implicit surface methods, our approach produces smoother and more stable morphing sequences, preserving shape consistency throughout the transformation. Comparative experiments demonstrate that our method produces smooth, consistent, and stable results and outperforms existing state-of-the-art techniques, such as IMPUS and DiffMorpher, especially for images with large shape differences.
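The interpolation-based strategy that the abstract contrasts against can be illustrated with a minimal sketch: spherical linear interpolation (slerp) of endpoint latents and text embeddings, with no geometric guidance. The helper, tensor shapes, and the `decode` placeholder are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of latent/text-embedding interpolation as used by prior
# diffusion-based morphing methods (illustrative only; not this paper's code).
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-7) -> torch.Tensor:
    """Spherical linear interpolation between two tensors of the same shape."""
    a_flat, b_flat = a.flatten(), b.flatten()
    cos_omega = torch.clamp(
        torch.dot(a_flat, b_flat) / (a_flat.norm() * b_flat.norm() + eps), -1.0, 1.0
    )
    omega = torch.acos(cos_omega)
    if omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    sin_omega = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / sin_omega) * a + (torch.sin(t * omega) / sin_omega) * b

# Hypothetical endpoint latents and text embeddings for the two input images.
latent_a, latent_b = torch.randn(4, 64, 64), torch.randn(4, 64, 64)
text_a, text_b = torch.randn(77, 768), torch.randn(77, 768)

# Each interpolated pair would be decoded by the diffusion model; without
# explicit shape guidance (e.g., normal maps), intermediate frames may drift
# out of the model's domain when the input shapes differ strongly.
for t in torch.linspace(0.0, 1.0, steps=5):
    latent_t = slerp(latent_a, latent_b, float(t))
    text_t = slerp(text_a, text_b, float(t))
    # decode(latent_t, text_t)  # placeholder for the diffusion sampling step
```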