Abstract: Existing image inpainting methods have shown promising results for regular and small-area damage. However, restoring irregular and large-area damage remains difficult and often yields mediocre results because the center of the hole is insufficiently constrained. Face inpainting is particularly challenging due to the complexity of facial structure and texture, which often leads to structural confusion and texture blurring. In this paper, we propose an attention-embedded generative adversarial network (AE-GAN) to address this problem. Overall, our framework is a U-shaped GAN model. To enable the network to capture effective features more quickly and reconstruct the content of the masked region in a face image, we embed an attention mechanism that simplifies the Squeeze-and-Excitation channel attention and place it appropriately in our generator. The generator adopts a U-net structure as its backbone, since this structure can encode information from low-level pixel context features to high-level semantic features and then decode those features back into an image. Experiments on the CelebA-HQ dataset demonstrate that our proposed method produces higher-quality inpainting results with more consistent and harmonious facial structure and appearance than existing methods, achieving state-of-the-art performance.
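To make the attention design concrete, below is a minimal sketch (PyTorch) of a simplified Squeeze-and-Excitation channel-attention block appended to a U-net encoder stage, of the kind the abstract describes. The exact simplification used in AE-GAN is not specified in the abstract, so the reduction ratio, layer choices, class names (`SimplifiedSEBlock`, `EncoderBlockWithAttention`), and the placement inside the generator are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SimplifiedSEBlock(nn.Module):
    """Channel attention: squeeze spatial dims, then excite per-channel weights."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)       # global average pooling per channel
        self.excite = nn.Sequential(                  # bottleneck MLP (assumed form)
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.excite(self.squeeze(x))              # per-channel weights in (0, 1)
        return x * w                                  # reweight feature channels


class EncoderBlockWithAttention(nn.Module):
    """One hypothetical U-net encoder stage with the SE block appended."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.attn = SimplifiedSEBlock(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.attn(self.conv(x))


if __name__ == "__main__":
    # Masked face image batch: 3 RGB channels plus a binary mask channel (assumed input format).
    x = torch.randn(2, 4, 256, 256)
    block = EncoderBlockWithAttention(4, 64)
    print(block(x).shape)  # torch.Size([2, 64, 256, 256])
```

In this sketch the attention block reweights encoder feature channels before they are passed down the U-net (and, via skip connections, to the decoder), which is one plausible way to bias the generator toward features useful for reconstructing the masked region.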