Abstract: Recently, with the rapid development of artificial intelligence, image generation based on deep learning
has advanced significantly. Image generation based on Generative Adversarial Networks (GANs) is a
promising line of study. However, because convolutions are spatial-agnostic and channel-specific, the
features extracted by conventional convolution-based GANs are constrained; such GANs cannot
capture the in-depth details of each image. Moreover, straightforwardly stacking convolutions introduces
too many parameters and layers into GANs, yielding a high overfitting risk. To overcome these
limitations, in this study, we propose a GAN called GIU-GANs (GIU: Global Information Utilization).
GIU-GANs leverages a new module called the GIU module, which integrates the squeeze-and-excitation
module and involution to focus on global information via the channel attention mechanism,
enhancing the quality of the generated images. Moreover, Batch Normalization (BN) inevitably ignores the
representation differences among the noise samples drawn by the generator and thus degrades the generated
image quality. Therefore, we introduce Representative Batch Normalization (RBN) into the GAN architecture. The CIFAR-10 and
CelebA datasets are employed to demonstrate the effectiveness of the proposed model. Extensive
experiments indicate that the proposed model achieves state-of-the-art performance.