DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion

Published: 01 Jan 2023 · Last Modified: 04 Mar 2025 · ICCV 2023 · License: CC BY-SA 4.0
Abstract: We present DreamPose, a diffusion-based method for generating animated fashion videos from still images. Given an image and a sequence of human body poses, our method synthesizes a video containing both human and fabric motion. To achieve this, we transform a pre-trained text-to-image model (Stable Diffusion [16]) into a pose-and-image guided video synthesis model, using a novel finetuning strategy, a set of architectural changes to support the added conditioning signals, and techniques to encourage temporal consistency. We fine-tune on a collection of fashion videos from the UBC Fashion dataset [50]. We evaluate our method on a variety of clothing styles and poses, and demonstrate that our method produces state-of-the-art results on fashion video animation. Video results are available on our project page: https://grail.cs.washington.edu/projects/dreampose
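The abstract mentions architectural changes that let a pretrained text-to-image UNet accept added conditioning signals. A common way to do this (used, for example, by inpainting variants of Stable Diffusion) is to widen the UNet's first convolution so the noisy latent can be concatenated with extra conditioning channels, zero-initializing the new weights so finetuning starts from the pretrained behavior. The sketch below illustrates that pattern only; the channel counts, names, and pose encoding are assumptions, not the actual DreamPose implementation.

```python
import torch
import torch.nn as nn

LATENT_CH = 4   # Stable Diffusion latent channels
POSE_CH = 10    # hypothetical pose encoding, e.g. several stacked pose maps


def expand_conv_in(conv: nn.Conv2d, extra_ch: int) -> nn.Conv2d:
    """Return a conv accepting extra input channels.

    Weights for the new channels are zero, so the expanded conv
    initially reproduces the pretrained conv's output exactly.
    """
    new_conv = nn.Conv2d(
        conv.in_channels + extra_ch,
        conv.out_channels,
        kernel_size=conv.kernel_size,
        stride=conv.stride,
        padding=conv.padding,
    )
    with torch.no_grad():
        new_conv.weight.zero_()
        new_conv.weight[:, : conv.in_channels] = conv.weight
        new_conv.bias.copy_(conv.bias)
    return new_conv


# Stand-in for the UNet's input conv from a pretrained checkpoint.
conv_in = nn.Conv2d(LATENT_CH, 320, kernel_size=3, padding=1)
conv_in = expand_conv_in(conv_in, POSE_CH)

noisy_latent = torch.randn(1, LATENT_CH, 64, 64)
pose_maps = torch.randn(1, POSE_CH, 64, 64)
out = conv_in(torch.cat([noisy_latent, pose_maps], dim=1))
print(out.shape)  # torch.Size([1, 320, 64, 64])
```

Because the new input weights start at zero, the conditioning has no effect at step zero of finetuning, which avoids disrupting the pretrained model while the pose pathway is learned.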