Generalized rectifier wavelet covariance models for texture synthesis

29 Sept 2021, 00:34 (edited 05 May 2022) · ICLR 2022 Poster
  • Keywords: texture synthesis, generative models, wavelets
  • Abstract: State-of-the-art maximum entropy models for texture synthesis are built from statistics relying on image representations defined by convolutional neural networks (CNNs). Such representations capture rich structures in texture images, outperforming wavelet-based representations in this regard. However, unlike neural networks, wavelets offer interpretable representations, as they are known to detect structures at multiple scales (e.g. edges) in images. In this work, we propose a family of statistics built upon non-linear, wavelet-based representations, which can be viewed as a particular instance of a one-layer CNN using a generalized rectifier non-linearity. These statistics significantly improve the visual quality of previous classical wavelet-based models and produce syntheses of similar quality to state-of-the-art models, on both gray-scale and color textures. We further provide insights into memorization effects in these models.
  • One-sentence Summary: This paper presents a model for texture synthesis, built on a wavelet-based representation of images.
  • Supplementary Material: zip
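The abstract describes statistics obtained by passing an image through a fixed wavelet filter bank (a one-layer CNN with fixed weights), applying a generalized rectifier non-linearity, and summarizing the responses with covariances. The sketch below is a minimal illustrative stand-in, not the paper's implementation: it assumes simple Haar-like 2x2 filters and models the generalized rectifier as a family of shifted ReLUs `relu(w - t)`; the filter choices, shifts, and normalization are all hypothetical.

```python
import numpy as np

def wavelet_channels(img):
    """One-layer 'CNN' with fixed Haar-like 2x2 filters (an illustrative
    stand-in for a proper wavelet transform)."""
    h = np.array([[1.0, -1.0], [1.0, -1.0]]) / 2.0   # horizontal edges
    v = np.array([[1.0, 1.0], [-1.0, -1.0]]) / 2.0   # vertical edges
    d = np.array([[1.0, -1.0], [-1.0, 1.0]]) / 2.0   # diagonal edges
    chans = []
    for f in (h, v, d):
        # valid 2x2 correlation, stride 1, written with array slices
        out = (f[0, 0] * img[:-1, :-1] + f[0, 1] * img[:-1, 1:]
               + f[1, 0] * img[1:, :-1] + f[1, 1] * img[1:, 1:])
        chans.append(out)
    return np.stack(chans)                            # shape (3, H-1, W-1)

def rectifier_covariance(img, shifts=(-0.5, 0.0, 0.5)):
    """Covariance of generalized-rectifier responses: relu(w - t) for each
    wavelet channel w and shift t (hypothetical parameterization)."""
    w = wavelet_channels(img)
    feats = [np.maximum(w[c] - t, 0.0).ravel()
             for c in range(w.shape[0]) for t in shifts]
    F = np.stack(feats)                               # (channels * shifts, pixels)
    F = F - F.mean(axis=1, keepdims=True)             # center each feature
    return F @ F.T / F.shape[1]                       # empirical covariance
```

In a maximum entropy synthesis scheme, statistics like this covariance matrix would be computed on the target texture and then matched by the synthesized image, e.g. via gradient descent on the pixels.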