Modular Lagrangian Neural Networks: Designing Structures of Networks with Physical Inductive Biases

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: Representation Learning, Physical Inductive Bias, Lagrangian Mechanics, Dynamics, Extrapolation
Abstract: Deep learning often struggles to extrapolate: models fail on untrained data domains or on inputs of different dimensions, and this problem is especially common in physical settings. Leveraging physical inductive biases can alleviate this issue and help models generalise the laws of physics. Based on this idea, we propose a structured neural network called the Modular Lagrangian Neural Network (ModLaNN). This model can learn the dynamics of simpler systems, such as three-body systems, and extrapolate to more complex ones, such as multi-body systems, which is not feasible with other related physics-informed neural networks. We evaluated our model on double-pendulum and three-body systems and achieved the best results compared with our counterparts. We then applied the trained models directly to predict the motion of multi-pendulum and multi-body systems, demonstrating the strong extrapolation performance of our method.
One-sentence Summary: We propose a physics-informed neural network structure that learns the laws of physics governing dynamical systems and extrapolates to more complex systems governed by the same laws.
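For readers unfamiliar with Lagrangian-style physics priors, the sketch below illustrates the generic idea the abstract builds on: parameterise a scalar Lagrangian L(q, q_dot) with a neural network and recover accelerations through the Euler-Lagrange equation. This is a minimal illustration of the general Lagrangian-neural-network approach, not the authors' modular ModLaNN architecture; the `lagrangian` MLP and its parameters are hypothetical placeholders.

```python
import jax
import jax.numpy as jnp

def lagrangian(params, q, q_dot):
    """Hypothetical learned Lagrangian: a small MLP over (q, q_dot)."""
    x = jnp.concatenate([q, q_dot])
    for W, b in params:
        x = jnp.tanh(W @ x + b)
    return jnp.sum(x)  # scalar L(q, q_dot)

def acceleration(params, q, q_dot):
    """Recover q_ddot from the Euler-Lagrange equation:
    d/dt (dL/dq_dot) = dL/dq
    => q_ddot = H^{-1} (dL/dq - (d^2L / dq_dot dq) q_dot), with H = d^2L / dq_dot^2.
    """
    L = lambda q_, qd_: lagrangian(params, q_, qd_)
    dL_dq = jax.grad(L, argnums=0)(q, q_dot)
    H = jax.hessian(L, argnums=1)(q, q_dot)                       # d^2L / dq_dot^2
    mixed = jax.jacobian(jax.grad(L, argnums=1), argnums=0)(q, q_dot)  # d^2L / dq_dot dq
    return jnp.linalg.solve(H, dL_dq - mixed @ q_dot)
```

In this style of model, the network is typically trained by matching the accelerations implied by the learned Lagrangian to observed trajectory data, and trajectories are then rolled out with a standard ODE integrator.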
7 Replies
