PIHLoRA: Physics-informed hypernetworks for low-ranked adaptation

Published: 27 Oct 2023, Last Modified: 11 Dec 2023 · AI4Mat-2023 Poster
Submission Track: Papers
Submission Category: AI-Guided Design
Keywords: PINNs, Low-Rank Adaptation, Hypernetworks, PDE
Abstract: Physics-informed neural networks (PINNs) have been widely used to develop neural surrogates for solutions of partial differential equations (PDEs). A drawback of PINNs is that they must be retrained with every change in initial-boundary conditions and PDE coefficients. The hypernetwork, a model-based meta-learning technique, takes a parameterized task embedding as input and predicts the weights of a PINN as output. Predicting the weights of a neural network, however, is a high-dimensional regression problem, and hypernetworks are observed to perform sub-optimally when predicting parameters for large base networks. In this work we investigate whether this issue can be circumvented using low-rank adaptation (LoRA). Specifically, we use LoRA to decompose every layer of the base network into low-rank tensors and use hypernetworks to predict these low-rank tensors. However, we observe that the reduced dimensionality of the resulting weight-regression problem alone does not suffice to train the hypernetwork well. Nevertheless, adding a physics-informed loss (HyperPINN) drastically improves generalization. To demonstrate the efficacy of our proposed method, we consider PDEs widely used in materials science: Maxwell's equations, the elasticity equation, Burgers' equation, and the Navier-Stokes equations. We observe that LoRA-based HyperPINN (PIHLoRA) training allows us to learn fast solutions with an 8x reduction in predicted parameters on average, without compromising accuracy compared to all other baselines.
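The core idea in the abstract, having a hypernetwork predict low-rank (LoRA-style) factors for each base-network layer instead of the full weight matrix, can be illustrated with a minimal NumPy sketch. All sizes (`d_in`, `d_out`, rank `r`, embedding dimension) and the linear hypernetwork `H` below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: one base PINN layer of shape (d_out, d_in), LoRA rank r.
d_in, d_out, r = 64, 64, 4  # r << d_in is what shrinks the regression target

# Full-weight regression target: d_out * d_in parameters per layer.
full_params = d_out * d_in
# LoRA factorization W ~= B @ A with B: (d_out, r) and A: (r, d_in).
lora_params = d_out * r + r * d_in
# At these illustrative sizes the ratio happens to be 8x, matching the
# average reduction reported in the abstract.
reduction = full_params / lora_params

# A toy "hypernetwork": a single linear map from a task embedding
# (e.g. PDE coefficients or boundary-condition parameters) to the
# flattened LoRA factors of one layer.
emb_dim = 8
H = rng.normal(0.0, 0.1, size=(lora_params, emb_dim))

def predict_layer(task_embedding):
    """Predict the low-rank weight B @ A of one layer from a task embedding."""
    flat = H @ task_embedding
    B = flat[: d_out * r].reshape(d_out, r)
    A = flat[d_out * r:].reshape(r, d_in)
    return B @ A  # rank <= r by construction
```

In the paper's setting, `H` would itself be a trained network and the predicted factors would parameterize a PINN whose residual on the governing PDE supplies the physics-informed loss; this sketch only shows where the parameter reduction comes from.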
Digital Discovery Special Issue: Yes
Submission Number: 91