Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling

May 21, 2021 (edited Oct 25, 2021), NeurIPS 2021 Poster
  • Keywords: generative models, variational autoencoders, physics-integrated machine learning, gray-box modeling, hybrid modeling
  • TL;DR: For VAEs integrated with physics-based models, we propose a regularized learning method that strikes a balance between the trainable neural networks and the physics components.
  • Abstract: Integrating physics models within machine learning models holds considerable promise for learning robust models with improved interpretability and the ability to extrapolate. In this work, we focus on the integration of incomplete physics models into deep generative models. In particular, we introduce an architecture of variational autoencoders (VAEs) in which a part of the latent space is grounded by physics. A key technical challenge is to strike a balance between the incomplete physics and trainable components such as neural networks, ensuring that the physics part is used in a meaningful manner. To this end, we propose a regularized learning method that controls the effect of the trainable components and preserves the semantics of the physics-based latent variables as intended. We not only demonstrate generative performance improvements over a set of synthetic and real-world datasets, but also show that the learned models are robust and can consistently extrapolate beyond the training distribution in a meaningful manner. Moreover, we show that the generative process can be controlled in an interpretable manner.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: zip
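The abstract's core idea can be caricatured in a few lines: split the latent space into a physics-grounded part decoded by a known but incomplete physics model and a trainable correction, then regularize the correction so that the physics latent keeps its intended meaning. The sketch below is an illustrative assumption, not the paper's actual architecture: it uses a toy physics law y = a·t, a linear encoder, and a simple additive correction in place of real neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each signal follows a known physics law y = a * t (a is the
# physical parameter the latent should capture), plus a small nuisance
# component that the physics model misses.
t = np.linspace(0.0, 1.0, 20)                  # time grid, T = 20
a_true = rng.uniform(0.5, 2.0, size=(100, 1))  # true physical parameter
X = a_true * t + 0.05 * np.sin(6 * np.pi * t)  # observations, shape (N, T)
N, T = X.shape

def physics_decoder(z_p):
    """Known but incomplete physics model: reconstruct from z_p alone."""
    return z_p * t

# Trainable components, kept linear so the sketch stays tiny: an encoder
# mapping a signal to its physics latent z_p, and an additive correction
# standing in for the auxiliary neural network.
w_enc = rng.normal(0.0, 0.1, size=(T, 1))
w_aux = np.zeros(T)
lam, lr = 1.0, 0.1  # regularization weight on the correction; step size

losses = []
for _ in range(500):
    z_p = X @ w_enc                       # physics-grounded latent
    R = physics_decoder(z_p) + w_aux - X  # hybrid reconstruction residual
    mse = np.mean(R ** 2)
    # Regularizer limiting how much work the trainable correction does,
    # so the physics latent stays responsible for explaining the data.
    losses.append(mse + lam * np.mean(w_aux ** 2))
    # Analytic gradients of the regularized objective (plain least squares).
    g_enc = 2.0 * (X.T @ (R @ t))[:, None] / (N * T)
    g_aux = 2.0 * R.mean(axis=0) / T + 2.0 * lam * w_aux / T
    w_enc -= lr * g_enc
    w_aux -= lr * g_aux
```

After training, the reconstruction loss drops sharply while the recovered latent `X @ w_enc` tracks the true physical parameter `a_true`, which is the kind of preserved semantics the regularizer is meant to enforce. Without the penalty on `w_aux`, an unconstrained correction could absorb the signal and leave the physics latent meaningless.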