Keywords: ai4science, physics-informed machine learning, finite elements, operator learning, interpretable scientific discovery
TL;DR: We show that existing conditional Whitney form formulations lead to trivial structure preservation, propose incorporating additive structure to achieve meaningful physics recovery, and validate our approach experimentally.
Abstract: Conditional Whitney forms have recently emerged as a promising framework at the intersection of scientific machine learning and finite element analysis. They offer a solid theoretical foundation for enforcing conservation laws in complex machine learning settings. However, their use so far has been restricted to learning tasks where structural constraints can be satisfied with simple, yet inaccurate, physics representations. In this work, we analyze why existing formulations reduce to standard unconstrained reformulations, circumventing physics recovery, and highlight the necessity of incorporating additive structure pertaining to the governing physics of the system. Building on these theoretical insights, we reformulate the learning problem to enable data-driven physics recovery and employ conditional Whitney forms to turn a Transformer-based architecture into a structure-preserving reduced-order model. We demonstrate the validity of our theoretical insights and the effectiveness of the proposed reformulation on a range of advection-diffusion systems of increasing difficulty. Our contributions can be viewed as a step towards understanding the capacity of conditional Whitney forms to build reliable structure-preserving models that harness the modeling power of state-of-the-art machine learning architectures in the physical sciences.
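For reference, a generic advection-diffusion equation of the kind the abstract refers to (the specific coefficients, source terms, and boundary conditions studied in the paper are not stated here and are assumed generic) can be written as
$$
\partial_t u + \mathbf{v} \cdot \nabla u - \nabla \cdot (\kappa \nabla u) = f,
$$
where $u$ is the transported quantity, $\mathbf{v}$ the advection velocity, $\kappa$ the diffusion coefficient, and $f$ a source term.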
Supplementary Material: zip
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 21357