Abstract: In augmented reality (AR) applications, generating virtual object shadows while maintaining precision and consistency between virtual and real regions is a challenging task. To this end, we propose a learnable weighted recurrent generative adversarial network (LRGAN) for end-to-end shadow generation. Without any additional computational overhead, LRGAN only needs to analyze the background context to build a bridge between the target shadows and the background. Our model applies multiple progressive steps to recurrently compute precise reference masks, from which a fine-grained shadow generation module produces the shadows. A learnable weighted fusion module, which normalizes pixel values to handle pixel overflow, fuses the generated shadows with the original image. In addition, we combine per-module training with whole-model training. Experimental results show that the proposed LRGAN not only improves the plausibility of shadow location and shape but also achieves color harmony in the shadow regions. Without any other prior knowledge or post-processing, it outperforms state-of-the-art end-to-end methods.
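To illustrate the role of the learnable weighted fusion step described above, the following is a minimal sketch, assuming a PyTorch formulation with a single learnable blending weight and a clamp-based normalization; the class name, the scalar weight, and the exact blending formula are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LearnableWeightedFusion(nn.Module):
    """Hypothetical fusion module: blends a generated shadow map into the
    original image with a learnable weight, then normalizes pixel values
    so the composite stays in the valid range (no overflow)."""

    def __init__(self):
        super().__init__()
        # Learnable scalar controlling shadow strength (assumed form).
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, image, shadow, mask):
        # Darken the image by the generated shadow only inside the reference mask.
        fused = image - self.alpha * shadow * mask
        # Clamp back to [0, 1] to prevent pixel overflow/underflow.
        return fused.clamp(0.0, 1.0)

# Usage: image, shadow, and mask are tensors of shape (B, C, H, W) in [0, 1].
# composite = LearnableWeightedFusion()(image, shadow, mask)
```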