PINNs-Torch: Enhancing Speed and Usability of Physics-Informed Neural Networks with PyTorch

Published: 31 Oct 2023, Last Modified: 10 Nov 2023 · DLDE III Poster
Keywords: Physics-informed neural networks, PINNs, PyTorch, CUDA Graph, JIT
TL;DR: We introduce "PINNs-Torch", a package that accelerates PyTorch PINN implementations by up to 9x over TensorFlow using CUDA Graphs and JIT compilation.
Abstract: Physics-informed neural networks (PINNs) excel at supervised learning tasks constrained by physical laws, especially nonlinear partial differential equations (PDEs). In this paper, we introduce "PINNs-Torch", a Python package that accelerates PINN implementations using the PyTorch framework and streamlines user interaction by abstracting away the PDE problem setup. While we rely on PyTorch's dynamic computational graph for its flexibility, we mitigate its computational overhead in PINNs by compiling it to static computational graphs. In our assessment across 8 diverse examples, covering continuous, discrete, forward, and inverse configurations, naive PyTorch is slower than TensorFlow; however, when integrated with CUDA Graphs and JIT compilation, training speeds increase by up to 9 times relative to TensorFlow implementations. Additionally, through a real-world example, we highlight situations where our package might not deliver speed improvements. For community collaboration and future developments, our package code is accessible at:
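The abstract describes forming a PDE residual loss with automatic differentiation and then compiling the dynamic graph into a static one. The sketch below is not the PINNs-Torch API; it is a minimal, hypothetical illustration of that pattern for the 1D Burgers' equation, u_t + u·u_x − (0.01/π)·u_xx = 0, where the network, layer sizes, and collocation points are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative surrogate network u(x, t); architecture is an assumption,
# not the one used in the paper.
net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def pde_residual(x, t):
    """Residual of Burgers' equation at collocation points (x, t)."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    # First- and second-order derivatives via autograd; create_graph=True
    # keeps the graph so the residual itself remains differentiable.
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - (0.01 / torch.pi) * u_xx

# Random collocation points (placeholders for a real sampling strategy).
x = torch.rand(16, 1)
t = torch.rand(16, 1)
loss = pde_residual(x, t).pow(2).mean()
```

In PyTorch 2.x, a function like `pde_residual` can be handed to `torch.compile` to trade the dynamic graph's overhead for a compiled one, and on a GPU the training step can be captured with CUDA Graphs (e.g. via `torch.cuda.make_graphed_callables`); which of these mechanisms PINNs-Torch uses internally, and how, is described in the paper rather than shown here.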
Submission Number: 4