PICL: Incorporating Coarse-Grained Data and Physics Information for Superior Physical Systems Modeling

15 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Physics-informed machine learning, Coarse-grained data, PDEs, Neural operator
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Modeling physical systems faces two challenges: insufficient data and coarse-grained data quality. We propose PICL, a novel framework that reconstructs a learnable fine-grained state and enhances predictive ability in a physics-informed manner.
Abstract: Physics-informed machine learning has emerged as a promising approach for modeling physical systems. However, two significant challenges limit its real-world applicability. First, most realistic scenarios allow only coarse-grained measurements due to sensor limitations, making physics losses based on finite-dimensional approximations infeasible. Second, the high cost of data acquisition impedes the model's predictive ability. To address these challenges, we introduce a novel framework called Physics-Informed Coarse-grained data Learning (PICL), which incorporates physics information via a learnable fine-grained state representation reconstructed from coarse-grained data. This framework effectively integrates data-driven methods with physics-informed objectives, thereby significantly improving the predictive ability of the model. PICL comprises two modules: an encoding module, responsible for generating the learnable fine-grained state, and a transition module, used for predicting the subsequent state. To train these modules, we employ a base-training period followed by a two-stage fine-tuning period. The key idea behind this training strategy is to leverage the physics loss to enhance the reconstruction ability of the encoding module and the generalization ability of the transition module, using both labeled and unlabeled data. In the base-training period, we train both modules collaboratively using data loss and physics loss. In the two-stage fine-tuning period, we first tune the transition module with physics loss on unlabeled data, and then tune the encoding module with data loss on labeled data to propagate information from the transition module back to the encoding module. We demonstrate that PICL exhibits superior predictive ability in modeling various PDE-governed physical systems. Code is available on GitHub: https://github.com/PI-CL/PICL.
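The abstract's training schedule (joint base training with data and physics losses, then a two-stage fine-tuning that updates the transition and encoding modules in turn) can be illustrated with a minimal sketch. The sketch below is an illustrative assumption, not the authors' released code: the MLP `Encoder`/`Transition` modules, the 1D heat-equation residual used as the physics loss, the subsampling-based data loss, and all hyperparameters are stand-ins chosen only to make the three training phases concrete.

```python
# Minimal sketch of the PICL training schedule described in the abstract.
# Assumptions: 1D heat equation u_t = nu * u_xx as the governing PDE, coarse
# sensors obtained by subsampling the fine grid, and simple MLP modules.
import torch
import torch.nn as nn

N_COARSE, N_FINE, NU, DT, DX = 16, 64, 0.01, 1e-3, 1.0 / 64

class Encoder(nn.Module):
    """Encoding module: maps a coarse-grained measurement to a learnable fine-grained state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_COARSE, 128), nn.GELU(), nn.Linear(128, N_FINE))
    def forward(self, coarse):
        return self.net(coarse)

class Transition(nn.Module):
    """Transition module: predicts the fine-grained state at the next time step."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_FINE, 128), nn.GELU(), nn.Linear(128, N_FINE))
    def forward(self, fine):
        return self.net(fine)

def physics_loss(u, u_next):
    """Finite-difference residual of u_t = nu * u_xx on the fine grid (illustrative PDE)."""
    u_t = (u_next - u) / DT
    u_xx = (u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2]) / DX ** 2
    return ((u_t[:, 1:-1] - NU * u_xx) ** 2).mean()

def data_loss(pred_fine, target_coarse):
    """Compare the prediction, restricted to the sensor locations, with coarse labels."""
    stride = N_FINE // N_COARSE
    return ((pred_fine[:, ::stride] - target_coarse) ** 2).mean()

enc, trans = Encoder(), Transition()

# Toy batches: labeled (coarse_t, coarse_{t+1}) pairs and unlabeled coarse_t snapshots.
labeled_t, labeled_t1 = torch.randn(8, N_COARSE), torch.randn(8, N_COARSE)
unlabeled_t = torch.randn(32, N_COARSE)

# --- Base training: both modules trained collaboratively with data + physics loss. ---
opt = torch.optim.Adam(list(enc.parameters()) + list(trans.parameters()), lr=1e-3)
for _ in range(100):
    fine_t = enc(labeled_t)
    fine_t1 = trans(fine_t)
    loss = data_loss(fine_t1, labeled_t1) + physics_loss(fine_t, fine_t1)
    opt.zero_grad(); loss.backward(); opt.step()

# --- Fine-tuning stage 1: transition module only, physics loss on unlabeled data. ---
opt_t = torch.optim.Adam(trans.parameters(), lr=1e-4)
for _ in range(100):
    with torch.no_grad():
        fine_t = enc(unlabeled_t)          # encoder frozen in this stage
    loss = physics_loss(fine_t, trans(fine_t))
    opt_t.zero_grad(); loss.backward(); opt_t.step()

# --- Fine-tuning stage 2: encoding module only, data loss on labeled data. ---
opt_e = torch.optim.Adam(enc.parameters(), lr=1e-4)
for _ in range(100):
    fine_t1 = trans(enc(labeled_t))        # transition frozen, gradients flow to encoder
    loss = data_loss(fine_t1, labeled_t1)
    opt_e.zero_grad(); loss.backward(); opt_e.step()
```

In this reading, stage 2 propagates the physics-informed improvements of the frozen transition module back into the encoder through the data loss, matching the abstract's description; the specific PDE residual and sensor model would differ per system.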
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 76