Video2StyleGAN: Disentangling Local and Global Variations in a Video
Supplementary Videos
Video2StyleGAN: We present Video2StyleGAN, a video editing framework that generates videos from a single image. Given a driving video, our framework transfers its global and local motion to a reference image. The rotation and translation of the head are derived from the driving frames.
Comparison with the baseline and other methods
Reference Image
Driving Video Baseline FOMM LIA TPS Ours
Comparisons with alternative methods: We compare our method against the baseline and other state-of-the-art methods trained on video data and keypoints. Our method generates videos at high resolution (1024 x 1024) and does not require training on videos. Note the high-frequency details preserved by our method.
StyleHEAT Results: Note that the reconstruction quality is poor for the above identities, and that high-frequency details are missing.