Autoregressive Graph Network for Learning Multi-step Physics

Published: 01 Feb 2023, Last Modified: 13 Feb 2023. Submitted to ICLR 2023.
Keywords: graph network, autoregressive model, physics simulation, forward model, inverse model
TL;DR: An Autoregressive Graph Network (GN) that learns forward particle-based physics using inductive biases.
Abstract: In this work, we propose an Autoregressive Graph Network (AGN) that learns forward physics using a temporal inductive bias. Currently, temporal state-space information is provided as additional input to a GN when generating roll-out physics simulations. While this improves the network's predictive performance over multiple time steps, a temporal model enables the network to induce and learn temporal biases. In dynamical systems, the arrow of time simplifies the space of possible interactions, in the sense that current observations can be assumed to depend only on preceding states. The autoregressive property naturally induces the arrow of time and can further constrain physics-induced GNs to conserve symmetries over long time horizons. Our proposed GN encodes temporal state information using an autoregressive encoder that computes latent temporal embeddings over multiple time steps in parallel during a single forward pass. We perform case studies that compare multi-step forward predictions against baseline data-driven GNs across diverse datasets featuring different particle interactions. When conditioned on optimal historical states, our approach outperforms the state-of-the-art GN in 9 out of 10 particle physics datasets, and physics-induced GNs in 7 out of 10.
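The abstract's key mechanism — embedding each time step using only current and past states, yet computing all embeddings in one forward pass — can be sketched with a causal (lower-triangular) weighting. This is a minimal illustration, not the paper's implementation: the function name, the simple running-average pooling, and the single projection matrix `W` are hypothetical stand-ins for the actual learned autoregressive encoder.

```python
import numpy as np

def causal_temporal_encoder(states, W):
    """Sketch of a causal temporal encoder (hypothetical, not the AGN itself).

    states: (T, D) array of per-step particle-state features.
    W: (D, H) projection matrix standing in for learned weights.
    Returns a (T, H) array of latent temporal embeddings where row t
    depends only on states[: t + 1], inducing the arrow of time.
    """
    T = states.shape[0]
    # Lower-triangular mask: step t may only attend to steps 0..t.
    mask = np.tril(np.ones((T, T)))
    # Normalize rows so each embedding is a causal running average.
    weights = mask / mask.sum(axis=1, keepdims=True)
    # A single matrix product yields all T embeddings in parallel.
    return (weights @ states) @ W
```

Because the causal structure lives in the mask, perturbing a later state cannot change any earlier embedding, while all time steps are still produced by one batched matrix multiply rather than a sequential loop.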
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip