RenderGAN: Generating Realistic Labeled Data
Leon Sixt, Benjamin Wild, Tim Landgraf
Nov 04, 2016 (modified: Jan 12, 2017) · ICLR 2017 conference submission · readers: everyone
Abstract: Deep Convolutional Neural Networks (DCNNs) show remarkable performance on many computer vision tasks. Due to their large parameter space, they require many labeled samples when trained in a supervised setting. The cost of annotating data manually can render the use of DCNNs infeasible. We present a novel framework called RenderGAN that can generate large amounts of realistic, labeled images by combining a 3D model with the Generative Adversarial Network framework. In our approach, image augmentations (e.g. lighting, background, and detail) are learned from unlabeled data such that the generated images are strikingly realistic while preserving the labels known from the 3D model. We apply the RenderGAN framework to generate images of barcode-like markers that are attached to honeybees. Training a DCNN on data generated by the RenderGAN yields considerably better performance than training it on various baselines.
TL;DR: We embed a 3D model in the GAN framework to generate realistic, labeled data.
Keywords: Unsupervised Learning, Computer Vision, Deep Learning, Applications
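The pipeline the abstract describes — a fixed 3D model whose renders pass through learned augmentation stages (lighting, background, detail) so that the label stays known while realism is added — can be sketched roughly as follows. This is a toy illustration with made-up function names and hand-coded augmentations standing in for the learned, generator-parameterized ones in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def render(label, size=32):
    """Stand-in for the fixed 3D model: draws a crude barcode-like
    tag image from a binary label vector (illustrative only)."""
    img = np.zeros((size, size))
    stripe = size // len(label)
    for i, bit in enumerate(label):
        img[:, i * stripe:(i + 1) * stripe] = float(bit)
    return img

# Hypothetical augmentation stages. In the RenderGAN, the parameters
# of such stages are produced by the GAN generator and learned from
# unlabeled data; here they are fixed toy operations.
def augment_lighting(img, gain, bias):
    return np.clip(gain * img + bias, 0.0, 1.0)

def augment_background(img, bg):
    mask = img > 0.5            # keep the marker, fill the rest
    return np.where(mask, img, bg)

def augment_detail(img, noise):
    return np.clip(img + noise, 0.0, 1.0)

def generate(label):
    """Compose the stages: the output looks 'photographic',
    but the label used by render() is known exactly."""
    base = render(label)
    lit = augment_lighting(base,
                           gain=rng.uniform(0.6, 1.0),
                           bias=rng.uniform(0.0, 0.2))
    composed = augment_background(lit,
                                  bg=rng.uniform(0.2, 0.5, size=lit.shape))
    return augment_detail(composed,
                          noise=rng.normal(0.0, 0.05, size=composed.shape))

label = rng.integers(0, 2, size=8)
fake = generate(label)          # realistic-looking image, label known
print(fake.shape)
```

Because only the augmentation stages are learned (adversarially, against real unlabeled images), the mapping from label to image stays under control, which is what lets the generated samples serve as labeled training data for a downstream DCNN.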