Keywords: Geometric Deep Learning, PDE Identification, Any-dimensional Learning
Abstract: Machine learning methods for data-driven identification of partial differential equations (PDEs) are typically defined for a fixed number of spatial dimensions and a particular choice of coordinates in which the data have been collected. This dependence prevents the learned equation from generalizing to other spaces. In this work, we reformulate the problem in terms of coordinate- and dimension-independent representations, paving the way toward what we might call "spatially liberated" PDE learning: we express the PDE in a form that is independent of the coordinate system and even of the underlying manifold on which the equation is defined. This allows us to learn a PDE in low-dimensional spaces and generalize to higher-dimensional spaces with different geometric properties. We provide extensive numerical experiments demonstrating that our approach enables robust transferability across geometric contexts: the dynamics learned in one space can be used, without retraining, to make accurate predictions in spaces with different dimensions, coordinate systems, boundary conditions, and curvatures, simply by recomputing the invariant features.
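The transfer mechanism described in the abstract can be illustrated with a minimal sketch. Here we assume, purely for illustration, that the invariant feature is the Laplacian (a coordinate- and dimension-free operator): we fit coefficients of u_t = a·u + b·Δu from 1D heat-equation data, then apply the same learned coefficients on a 2D grid by recomputing the discrete Laplacian there. The function names and finite-difference discretization are hypothetical and not taken from the paper.

```python
import numpy as np

def laplacian_1d(u, dx):
    # Periodic finite-difference Laplacian in 1D. The Laplace operator
    # is the "invariant feature" in this toy example: it is defined in
    # any dimension, so the learned law transfers across spaces.
    return (np.roll(u, -1) + np.roll(u, 1) - 2 * u) / dx**2

def laplacian_2d(u, dx):
    # The same invariant feature, recomputed on a 2D periodic grid.
    return (np.roll(u, -1, 0) + np.roll(u, 1, 0)
            + np.roll(u, -1, 1) + np.roll(u, 1, 1) - 4 * u) / dx**2

# --- generate 1D data from the heat equation u_t = lap(u) ---
n = 128
dx, dt = 1.0 / n, 1e-5
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)

feats, targets = [], []
for _ in range(200):
    lap = laplacian_1d(u, dx)
    u_next = u + dt * lap                      # ground-truth evolution
    feats.append(np.stack([u, lap], axis=-1))  # invariant features (u, lap u)
    targets.append((u_next - u) / dt)          # observed time derivative
    u = u_next
X = np.concatenate(feats)
y = np.concatenate(targets)

# Fit u_t ≈ a*u + b*lap(u); for the heat equation we expect a≈0, b≈1.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# --- transfer: evolve a 2D field with the SAME coefficients ---
m = 64
dx2, dt2 = 1.0 / m, 1e-5
xx, yy = np.meshgrid(np.linspace(0, 1, m, endpoint=False),
                     np.linspace(0, 1, m, endpoint=False))
v = np.sin(2 * np.pi * xx) * np.sin(2 * np.pi * yy)

v_pred = v + dt2 * (coef[0] * v + coef[1] * laplacian_2d(v, dx2))
v_true = v + dt2 * laplacian_2d(v, dx2)   # exact 2D heat-equation step
err = np.max(np.abs(v_pred - v_true))
```

Because the regression targets only invariant quantities, no part of the learned model refers to the dimension or coordinates of the training grid; the 2D prediction needs no retraining, only a recomputation of the Laplacian on the new domain.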
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 14220