Freezing partial source representations matters for image inpainting under limited data

Published: 01 Jan 2024, Last Modified: 25 Jan 2025 · Eng. Appl. Artif. Intell. 2024 · CC BY-SA 4.0
Abstract: Recent years have seen significant advances in image inpainting for missing regions of arbitrary shape. However, the performance of existing methods degrades drastically when insufficient data is available (e.g., 100 images), a setting that has drawn limited attention in the community. This work provides a solution for image inpainting in this challenging limited-data regime. Specifically, we first make an in-depth comparison of fine-tuning and training from scratch and find that, although the former outperforms the latter, the overall structural consistency and fine details remain unsatisfactory. Consequently, we propose a two-stage method based on transfer learning, namely T2inpaint. To capture the global structures of the target domain, in the first stage we exclusively refine the domain-specific weights, directing the model's attention toward the acquisition of high-level features. In the second stage, we train additional parameters integrated into the model frozen after the first stage. This approach aims to attain detailed textures while alleviating overfitting. As a result, the reusable knowledge from the source domain plays a crucial role in guiding the optimization process, preventing the inclusion of ambiguous content. Extensive experiments conducted on various low-data-regime datasets demonstrate that T2inpaint produces plausible images and achieves state-of-the-art performance, particularly in scenarios with fewer than 100 training samples. Meanwhile, rich ablation studies elucidate the nuanced aspects of our approach. Moreover, an empirical study on source domains, data regimes, and various data augmentations is conducted, facilitating potential future work.
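The two-stage strategy in the abstract can be illustrated with a toy sketch. This is an assumed, simplified analogue (not the paper's implementation): a 1-D model with a frozen "source" weight, a domain-specific weight trained in stage 1, and a newly added adapter parameter trained in stage 2 while everything from stage 1 stays frozen. All names (`w_spec`, `w_src`, `adapter`) are illustrative.

```python
# Toy sketch of two-stage selective training, assuming a 1-D linear model
# y = w_spec * x + w_src + adapter. w_src stands in for a frozen source
# weight; w_spec is the "domain-specific" weight; adapter is the extra
# parameter added in stage 2.

def grad(w_spec, w_src, adapter, x, t):
    # Squared-error loss: (prediction - target)^2.
    err = w_spec * x + w_src + adapter - t
    # Partial derivatives w.r.t. w_spec, w_src, adapter.
    return 2 * err * x, 2 * err, 2 * err

def train(data, epochs=200, lr=0.01):
    w_spec, w_src, adapter = 0.0, 0.5, 0.0  # w_src is pretrained and frozen

    # Stage 1: refine only the domain-specific weight (source weight frozen),
    # so the model first captures coarse structure of the target data.
    for _ in range(epochs):
        for x, t in data:
            g_spec, _, _ = grad(w_spec, w_src, adapter, x, t)
            w_spec -= lr * g_spec

    # Stage 2: freeze all stage-1 weights; train only the added adapter
    # to recover residual detail while limiting overfitting.
    for _ in range(epochs):
        for x, t in data:
            _, _, g_ad = grad(w_spec, w_src, adapter, x, t)
            adapter -= lr * g_ad

    return w_spec, w_src, adapter
```

In a real inpainting network the same idea would apply per parameter group (e.g. via `requires_grad = False` in PyTorch), with the adapter role played by the additional parameters integrated into the frozen first-stage model.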