Bi-branch network with region-wise dynamic convolution for image inpainting

Published: 01 Jan 2025, Last Modified: 02 Aug 2025, Neurocomputing 2025, CC BY-SA 4.0
Abstract: In image inpainting methods based on convolutional neural networks, the parameters of the convolution kernels are fixed once training is complete, making it difficult to adapt to variations in the input. Additionally, these methods treat the missing and complete regions identically, and thus struggle to accommodate the differing feature distributions of the two regions. To address these challenges, we propose a Bi-branch Network with Region-wise Dynamic Convolution (BNRDC), which consists of an upper and a lower branch. The upper branch is built around a Dynamic Convolution Kernel Prediction (DCKP) module, which predicts dynamic convolution kernels: by generating attention weights that combine multiple candidate kernels, DCKP adjusts kernel parameters dynamically in response to variations in the input. The lower branch is built around a Region Dynamic Convolution Attention (RDCA) module, which uses the dynamic kernels to perform region-wise convolutions that accommodate the differences in feature distributions. Furthermore, we design a Multi-Scale Feature Fusion (MSFF) module between the two branches to provide rich features for kernel prediction; it fuses multi-scale features from spatial and scale perspectives using dilated convolutions and a feature pyramid, respectively. Extensive experiments on the Places2, CelebA, and Dunhuang Challenge datasets demonstrate that the proposed method outperforms state-of-the-art baselines. Our method dynamically captures variations in the input and effectively aligns the features of missing regions with those of ground-truth images.
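The following is a minimal PyTorch sketch of the general dynamic-convolution mechanism the abstract describes: an attention head predicts per-sample weights that mix K candidate kernels before the convolution is applied. The class name `DynamicConv2d`, the kernel count, and all shapes are illustrative assumptions, not the authors' DCKP implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConv2d(nn.Module):
    """Input-conditioned convolution: K candidate kernels mixed by
    softmax attention weights (hypothetical sketch, not the paper's DCKP)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        self.k = kernel_size
        # K candidate kernels, shape (K, out_ch, in_ch, k, k).
        self.weight = nn.Parameter(
            0.02 * torch.randn(num_kernels, out_ch, in_ch,
                               kernel_size, kernel_size))
        # Attention head: global average pool -> K mixing weights.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_ch, num_kernels),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        pi = F.softmax(self.attn(x), dim=1)  # (B, K) per-sample weights
        # Mix candidates into one kernel per sample: (B, out_ch, in_ch, k, k).
        w_mix = torch.einsum('bk,koixy->boixy', pi, self.weight)
        # Grouped-conv trick applies a different kernel to each sample.
        w_mix = w_mix.reshape(b * self.out_ch, self.in_ch, self.k, self.k)
        out = F.conv2d(x.reshape(1, b * c, h, w), w_mix,
                       padding=self.k // 2, groups=b)
        return out.reshape(b, self.out_ch, h, w)


# Example: a batch of 8 feature maps with 64 channels.
x = torch.randn(8, 64, 32, 32)
layer = DynamicConv2d(64, 64)
print(layer(x).shape)  # torch.Size([8, 64, 32, 32])
```

In a region-wise setting such as RDCA, one could predict separate kernel mixtures for the missing and valid regions and blend the two convolution outputs with the inpainting mask, e.g. `mask * conv_miss(x) + (1 - mask) * conv_valid(x)`; this blend is our reading of the abstract, not code from the paper.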