TL;DR: LoopGAN extends the two-domain cycle of CycleGAN to longer loops, enabling unpaired sequential image transformation across more than two time steps.
Abstract: We tackle the problem of modeling sequential visual phenomena. Given examples of a phenomenon that can be divided into discrete time steps, we aim to take an input from any such time step and realize it at all other time steps in the sequence. Furthermore, we aim to do this \textit{without} ground-truth aligned sequences, avoiding the difficulty of gathering aligned data. This generalizes the unpaired image-to-image problem from generating pairs to generating sequences. We extend cycle consistency to \textit{loop consistency} and alleviate the difficulties associated with learning in the resulting long chains of computation. We show competitive results compared to existing image-to-image techniques when modeling several datasets, including the Earth's seasons and the aging of human faces.
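To make the loop-consistency idea concrete, here is a minimal sketch of what such a loss term might look like. It assumes a PyTorch-style setup with a list of generators where `generators[t]` maps images at time step `t` to time step `(t + 1) mod N`; the function and variable names are illustrative assumptions, not the authors' implementation.

```python
import torch.nn.functional as F

def loop_consistency_loss(generators, x, start_step):
    """Sketch of a loop-consistency term (illustrative, not the authors' code).

    Assumes generators[t] maps images at time step t to time step
    (t + 1) mod N. Starting from a real image batch x observed at
    start_step, apply all N generators in order so the composition
    returns to the starting time step, then penalize the L1
    reconstruction error, analogous to cycle consistency in CycleGAN.
    """
    n_steps = len(generators)
    out = x
    for offset in range(n_steps):
        t = (start_step + offset) % n_steps
        out = generators[t](out)
    return F.l1_loss(out, x)
```

For example, with four seasonal domains this traverses spring → summer → fall → winter → spring and compares the final output to the original spring image; per-domain adversarial losses would be added separately, as in CycleGAN.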
Data: [Chairs](https://paperswithcode.com/dataset/chairs)