Keywords: image completion, generative adversarial networks, co-modulation
Abstract: Numerous task-specific variants of conditional generative adversarial networks have been developed for image completion. Yet, a serious limitation remains that all existing algorithms tend to fail when handling large-scale missing regions. To overcome this challenge, we propose a generic new approach that bridges the gap between image-conditional and recent modulated unconditional generative architectures via co-modulation of both conditional and stochastic style representations. Also, due to the lack of good quantitative metrics for image completion, we propose the new Paired/Unpaired Inception Discriminative Score (P-IDS/U-IDS), which robustly measures the perceptual fidelity of inpainted images compared to real images via linear separability in a feature space. Experiments demonstrate superior performance in terms of both quality and diversity over state-of-the-art methods in free-form image completion and easy generalization to image-to-image translation. Code is available at https://github.com/zsyzzsoft/co-mod-gan.
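The P-IDS/U-IDS idea described above — measuring fidelity via linear separability of real and inpainted images in a feature space — can be sketched in a few lines. This is a minimal numpy illustration, not the authors' implementation: it substitutes a logistic-regression classifier for the linear SVM, and random synthetic vectors for the Inception features the paper uses.

```python
import numpy as np

def fit_linear(real, fake, lr=0.1, steps=500):
    """Fit a linear classifier (logistic-regression stand-in for a linear SVM)
    separating real features (label +1) from fake features (label -1)."""
    X = np.vstack([real, fake])
    y = np.concatenate([np.ones(len(real)), -np.ones(len(fake))])
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        z = y * (X @ w + b)
        g = -y / (1.0 + np.exp(z))        # gradient of logistic loss w.r.t. score
        w -= lr * (X.T @ g) / len(X)
        b -= lr * g.mean()
    return w, b

def u_ids(real, fake):
    """Unpaired score: misclassification rate of the fitted linear classifier.
    Near 0.5 means real and inpainted features are linearly inseparable."""
    w, b = fit_linear(real, fake)
    err_real = np.mean(real @ w + b <= 0)  # real samples scored as fake
    err_fake = np.mean(fake @ w + b > 0)   # fake samples scored as real
    return 0.5 * (err_real + err_fake)

def p_ids(real, fake):
    """Paired score: fraction of pairs where the inpainted image outscores
    its real counterpart under the linear classifier."""
    w, b = fit_linear(real, fake)
    return np.mean((fake @ w) > (real @ w))
```

On clearly separable toy features (e.g. two well-separated Gaussian clusters), `u_ids` approaches 0 and `p_ids` approaches 0; for a perfect generator both would approach 0.5.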
One-sentence Summary: Bridging the gap between image-conditional and unconditional GAN architectures via co-modulation
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [![github](/images/github_icon.svg) zsyzzsoft/co-mod-gan](https://github.com/zsyzzsoft/co-mod-gan)
Data: [COCO-Stuff](https://paperswithcode.com/dataset/coco-stuff), [CelebA-HQ](https://paperswithcode.com/dataset/celeba-hq), [FFHQ](https://paperswithcode.com/dataset/ffhq), [Places](https://paperswithcode.com/dataset/places)
Community Implementations: [![CatalyzeX](/images/catalyzex_icon.svg) 1 code implementation](https://www.catalyzex.com/paper/arxiv:2103.10428/code)