What does it take to generate natural textures?

Ivan Ustyuzhaninov *, Wieland Brendel *, Leon Gatys, Matthias Bethge

Nov 05, 2016 (modified: Mar 03, 2017) ICLR 2017 conference submission readers: everyone
  • Abstract: Natural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in deep neural networks trained with supervision. It is, however, unclear which aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling, or the training of the features on natural images? Here we address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling, and random filters.
  • TL;DR: Natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters.
  • Paperhash: |what_does_it_take_to_generate_natural_textures
  • Conflicts: bethgelab.org, uni-tuebingen.de
  • Authorids: ivan.ustyuzhaninov@bethgelab.org, wieland.brendel@bethgelab.org, leon.gatys@bethgelab.org, matthias.bethge@bethgelab.org
  • Keywords: Deep learning, Unsupervised Learning
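The approach described in the abstract, synthesizing a texture by matching the feature statistics (Gram matrices) of a single convolutional layer with fixed random filters and no pooling, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the filter count, kernel size, image size, learning rate, and optimizer here are all assumptions, and a real texture image would replace the random stand-in target.

```python
# Sketch: single-layer, random-filter texture synthesis by Gram-matrix matching.
# Hypothetical hyperparameters throughout; not the paper's exact setup.
import torch

torch.manual_seed(0)

def gram(feats):
    # feats: (channels, H*W) -> channel-correlation (Gram) matrix,
    # normalized by the number of spatial positions
    return feats @ feats.t() / feats.shape[1]

# A single convolutional layer with random, untrained filters and no pooling.
n_filters, ksize = 64, 5
conv = torch.nn.Conv2d(3, n_filters, ksize, padding=ksize // 2)
for p in conv.parameters():
    p.requires_grad_(False)  # filters stay random and fixed

def features(img):
    f = torch.relu(conv(img))       # nonlinearity after the single conv layer
    return f.squeeze(0).flatten(1)  # (n_filters, H*W)

# Target texture (stand-in: a random image; use a real texture in practice).
target = torch.rand(1, 3, 32, 32)
target_gram = gram(features(target))

# Synthesize by gradient descent on the image, matching Gram statistics.
synth = torch.rand(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([synth], lr=0.05)

with torch.no_grad():
    initial_loss = torch.nn.functional.mse_loss(
        gram(features(synth)), target_gram).item()

for step in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(gram(features(synth)), target_gram)
    loss.backward()
    opt.step()

final_loss = loss.item()
```

The key point matching the paper's claim is that `conv` is never trained: the image alone is optimized until its single-layer feature correlations match those of the target.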
