Incremental Learning of Multi-Domain Image-to-Image Translations

Published: 01 Jan 2021, Last Modified: 18 Nov 2023. IEEE Trans. Circuits Syst. Video Technol. 2021
Abstract: Current multi-domain image-to-image translation models assume a fixed set of domains and that data from all domains are available throughout training. In practice, however, we may want to add new domains to an already trained model over time. Existing methods either require re-training the whole model with data from all domains or require training several additional modules to accommodate each new domain. To address these limitations, we present IncrementalGAN, a multi-domain image-to-image translation model that can incrementally learn new domains using only a single generator. Our approach first decouples the domain label representation from the generator, allowing the generator to be re-used for new domains without any architectural modification. Next, we introduce a distillation loss that prevents the model from forgetting previously learned domains. Our model compares favorably against several state-of-the-art baselines.
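The two ideas in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual implementation: the embedding table stands in for the decoupled domain-label representation (adding a domain only appends a row, leaving the generator architecture untouched), and the L1 penalty stands in for a distillation loss that keeps the updated generator close to a frozen copy of its previous self on old domains. All names and shapes here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoupled domain-label embedding: the generator consumes a
# fixed-size embedding vector, so supporting a new domain only means
# appending one row to this table -- no architectural change.
embed_dim = 8
domain_embeddings = rng.normal(size=(3, embed_dim))  # 3 previously learned domains

def add_domain(table):
    """Append an embedding row for a newly added domain."""
    new_row = rng.normal(size=(1, table.shape[1]))
    return np.concatenate([table, new_row], axis=0)

domain_embeddings = add_domain(domain_embeddings)    # now 4 domains

# Hypothetical distillation loss: on inputs from old domains, penalize the
# updated generator for drifting from the frozen old generator's outputs
# (pixel-wise L1 here; the paper's exact loss may differ).
def distillation_loss(new_out, old_out):
    return float(np.mean(np.abs(new_out - old_out)))
```

In this sketch, training on a new domain would combine the usual translation objective on new-domain data with `distillation_loss` evaluated on old-domain inputs, so previously learned mappings are preserved without re-training on the old data.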
