Generative Adversarial Neural Operators

Published: 12 Oct 2022, Last Modified: 28 Feb 2023, Accepted by TMLR
Abstract: We propose the generative adversarial neural operator (GANO), a generative model paradigm for learning probabilities on infinite-dimensional function spaces. Many types of data in the natural sciences and engineering are sampled from infinite-dimensional function spaces, where classical finite-dimensional deep generative adversarial networks (GANs) may not be directly applicable. GANO generalizes the GAN framework and allows for the sampling of functions by learning push-forward operator maps in infinite-dimensional spaces. GANO consists of two main components: a generator neural operator and a discriminator neural functional. The inputs to the generator are samples of functions from a user-specified probability measure, e.g., a Gaussian random field (GRF), and the generator outputs are synthetic data functions. The input to the discriminator is either a real or a synthetic data function. In this work, we instantiate GANO using the Wasserstein criterion and show how the Wasserstein loss can be computed in infinite-dimensional spaces. We empirically study GANO in controlled cases where both input and output functions are samples from GRFs and compare its performance to the finite-dimensional counterpart GAN. We also empirically study the efficacy of GANO on real-world function data of volcanic activity and show its superior performance over GAN.
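To make the two components concrete, here is a minimal, illustrative sketch in PyTorch of a GANO-style generator neural operator (built from Fourier layers, so it accepts an input grid of any resolution) and a discriminator neural functional (a neural operator whose output is integrated over the domain to produce a scalar). The class names, the 1D setting, and the crude GRF sampler below are our own assumptions for illustration; they are not the API of the released code.

```python
# Minimal illustrative sketch (not the authors' released code): a 1D GANO-style
# generator/discriminator pair in PyTorch. SpectralConv1d, NeuralOperator1d,
# DiscriminatorFunctional and sample_grf are placeholder names assumed here.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Fourier layer: keep the lowest `modes` frequencies and learn one complex weight per mode."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weight = nn.Parameter(torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

    def forward(self, x):                                  # x: (batch, channels, n_points)
        x_ft = torch.fft.rfft(x)
        out_ft = torch.zeros_like(x_ft)
        out_ft[..., :self.modes] = torch.einsum("bim,iom->bom", x_ft[..., :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))

class NeuralOperator1d(nn.Module):
    """Lift -> two Fourier blocks -> project; the input grid resolution is arbitrary."""
    def __init__(self, width=32, modes=16, out_channels=1):
        super().__init__()
        self.lift = nn.Conv1d(1, width, 1)
        self.spec1, self.w1 = SpectralConv1d(width, modes), nn.Conv1d(width, width, 1)
        self.spec2, self.w2 = SpectralConv1d(width, modes), nn.Conv1d(width, width, 1)
        self.proj = nn.Conv1d(width, out_channels, 1)

    def forward(self, x):                                  # x: (batch, 1, n_points)
        x = self.lift(x)
        x = torch.relu(self.spec1(x) + self.w1(x))
        x = torch.relu(self.spec2(x) + self.w2(x))
        return self.proj(x)

class DiscriminatorFunctional(nn.Module):
    """Neural functional: a neural operator followed by integration over the domain -> scalar."""
    def __init__(self):
        super().__init__()
        self.op = NeuralOperator1d(out_channels=1)

    def forward(self, u):                                  # u: (batch, 1, n_points)
        return self.op(u).mean(dim=(1, 2))                 # quadrature approximation of the integral

def sample_grf(batch, n_points, alpha=2.0):
    """Crude Gaussian-random-field sampler: white noise shaped by a power-law spectrum."""
    k = torch.arange(1, n_points // 2 + 2, dtype=torch.float32)
    noise = torch.randn(batch, 1, n_points // 2 + 1, dtype=torch.cfloat)
    return torch.fft.irfft(noise / k ** alpha, n=n_points)
```

Because both networks act pointwise or in Fourier space, the same weights apply to any discretization of the input function, which is what allows the generator to be queried beyond the training resolution.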
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=ay72p3c2JA
Changes Since Last Submission: Dear action editor, we thank you and the reviewers for the constructive reviews in the previous round. We address the main concerns below.

Discretization invariance: by this term, we mean that the output remains unchanged regardless of the input discretization, as long as a minimum resolution threshold is met. Moreover, the same model receives an input function at an arbitrary resolution and outputs a function that can be queried at any point (in contrast to a GAN, which is tied to a fixed discretization). GANO is discretization invariant under this definition, and we have added the definition to the revised version. Indeed, as the action editor points out, discretization is inevitable and we are not trying to avoid it.

Choice of hyperparameters for GAN: while GANO outperforms GANs, we purposely kept the discretization for the GANs identical to that used for GANO, rather than tuning the GAN hyperparameters independently.

Baseline: our work is the first to design generative models on function spaces, so there is currently no baseline that is discretization invariant under the above definition**. We believe GAN is the most natural baseline because it can be used to generate images at a given resolution. The comparison is fair since GANO outperforms GAN even when both models are tested at the same resolution. In addition, GANO provides zero-shot super-resolution beyond the training discretization, which is simply not possible with GANs. A similar strategy was employed in early neural operator papers (Neural Operator: Graph Kernel Network for Partial Differential Equations; Fourier Neural Operator for Parametric Partial Differential Equations), where a model trained on data at one (or multiple) resolutions was tested at different resolutions. As in those studies, due to the lack of prior work on learning nonlinear operators, the earlier works used neural networks as their baselines. We have added this discussion to the related work. If the action editor instructs, we can run experiments on GAN+interpolation as another baseline; however, the insufficiency of interpolation is already established in prior neural operator papers.

**As mentioned in the paper, prior works on generative models in infinite-dimensional spaces are limited to pure memorization using an average of Dirac delta functions and do not count as a suitable baseline.
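To illustrate the discretization-invariance and zero-shot super-resolution points above, the following is a hedged sketch of one Wasserstein training step and a higher-resolution query, reusing the placeholder modules from the sketch after the abstract. The WGAN-GP-style gradient penalty and the grid-mean quadrature used for the norm are our assumptions, not necessarily the exact loss used in the paper.

```python
# Hedged sketch of one Wasserstein-GANO training step plus a zero-shot super-resolution
# query. It reuses the placeholder modules from the sketch after the abstract; the
# WGAN-GP-style gradient penalty and the grid-mean quadrature are assumptions.
import torch

G, D = NeuralOperator1d(), DiscriminatorFunctional()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

def discriminator_step(real, gp_weight=10.0):               # real: (batch, 1, n_points)
    fake = G(sample_grf(real.size(0), real.size(-1))).detach()
    loss = D(fake).mean() - D(real).mean()                   # Wasserstein critic objective
    eps = torch.rand(real.size(0), 1, 1)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(mix).sum(), mix, create_graph=True)[0]
    norm = grad.pow(2).mean(dim=(1, 2)).sqrt()               # gradient norm approximated on the grid
    loss = loss + gp_weight * ((norm - 1.0) ** 2).mean()
    opt_d.zero_grad(); loss.backward(); opt_d.step()

def generator_step(batch=16, n_points=128):
    loss = -D(G(sample_grf(batch, n_points))).mean()
    opt_g.zero_grad(); loss.backward(); opt_g.step()

# Zero-shot super-resolution: after training at 128 points, query the same generator
# weights on a 4x finer grid.
with torch.no_grad():
    hi_res_sample = G(sample_grf(1, 512))
```

The super-resolution query at the end is the operation that has no GAN counterpart: the trained GAN generator outputs a fixed-size array, whereas the neural operator generator accepts a GRF sample on any grid.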
Code: https://github.com/kazizzad/GANO
Assigned Action Editor: ~Marc_Lanctot1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 321