Automated Translation and Accelerated Solving of Differential Equations on Multiple GPU Platforms

Published: 13 Nov 2023, Last Modified: 4 Oct 2024. Computer Methods in Applied Mechanics and Engineering, Volume 419, 2024. License: CC BY-NC-ND 4.0
Abstract: We demonstrate a high-performance vendor-agnostic method for massively parallel solving of ensembles of ordinary differential equations (ODEs) and stochastic differential equations (SDEs) on GPUs. The method is integrated with a widely used differential equation solver library in a high-level language (Julia’s DifferentialEquations.jl) and enables GPU acceleration without requiring code changes by the user. Our approach achieves state-of-the-art performance compared to hand-optimized CUDA-C++ kernels while performing 20–100× faster than the vectorizing map (vmap) approach implemented in JAX and PyTorch. Performance evaluation on NVIDIA, AMD, Intel, and Apple GPUs demonstrates performance portability and vendor-agnosticism. We show composability with MPI to enable distributed multi-GPU workflows. The implemented solvers are fully featured – supporting event handling, automatic differentiation, and incorporation of datasets via the GPU’s texture memory – allowing scientists to take advantage of GPU acceleration on all major current architectures without changing their model code and without loss of performance. We distribute the software as an open-source library, DiffEqGPU.jl.
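As a rough illustration of the usage pattern described in the abstract, the sketch below shows how an ensemble of ODE trajectories might be offloaded to a GPU with DiffEqGPU.jl, following the library's documented EnsembleGPUKernel workflow. The Lorenz system, the parameter-randomization closure, the trajectory count, and the choice of the CUDA backend are illustrative assumptions rather than details taken from the paper; other vendor backends plug into the same call.

```julia
# Minimal sketch (assumed example: Lorenz ODE, randomized parameters, CUDA backend).
using DiffEqGPU, OrdinaryDiffEq, StaticArrays, CUDA

# Out-of-place ODE definition with static arrays, suitable for GPU kernel compilation.
function lorenz(u, p, t)
    σ, ρ, β = p
    du1 = σ * (u[2] - u[1])
    du2 = u[1] * (ρ - u[3]) - u[2]
    du3 = u[1] * u[2] - β * u[3]
    return SVector{3}(du1, du2, du3)
end

u0 = @SVector [1.0f0, 0.0f0, 0.0f0]
p  = @SVector [10.0f0, 28.0f0, 8.0f0 / 3.0f0]
tspan = (0.0f0, 10.0f0)
prob = ODEProblem{false}(lorenz, u0, tspan, p)

# Each trajectory in the ensemble gets a perturbed parameter set (illustrative choice).
prob_func = (prob, i, repeat) -> remake(prob, p = (@SVector rand(Float32, 3)) .* p)
ensembleprob = EnsembleProblem(prob; prob_func = prob_func, safetycopy = false)

# Solve many trajectories in one fused GPU kernel. Swapping the backend
# (e.g. AMDGPU.ROCBackend(), oneAPI.oneAPIBackend(), Metal.MetalBackend())
# retargets the same model code to other GPU vendors.
sol = solve(ensembleprob, GPUTsit5(), EnsembleGPUKernel(CUDA.CUDABackend());
            trajectories = 10_000, adaptive = false, dt = 0.01f0)
```

The key design point conveyed by the abstract is that the user-facing model definition is ordinary DifferentialEquations.jl code; only the ensemble algorithm argument selects the GPU backend.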