Track: Findings and open challenges
Keywords: fluid mechanics, Lagrangian, particle-based, graph neural networks
TL;DR: We investigate different approaches to learning weakly-compressible quasi-Lagrangian turbulence using graph neural networks, demonstrating that learned solvers can capture turbulence statistics more accurately than classical ones.
Abstract: Lagrangian, or particle-based, fluid mechanics methods are the dominant numerical tool for simulating complex boundaries, solid-fluid interactions, and multi-phase flows. While their counterpart, the Eulerian framework, has seen significant progress in learning turbulence closures – such as large eddy simulation (LES) modeling – turbulence modeling in the Lagrangian framework has been far less successful. In this paper, we first explain why preserving the correct energy spectrum, crucial for analyzing turbulence, is fundamentally impossible in a fully Lagrangian description. This limitation necessitates using quasi-Lagrangian schemes – methods that adjust the evolution of fluid particle positions beyond their physical velocity to improve accuracy. However, manually designing such corrections is challenging, motivating data-driven approaches. To this end, we are the first to investigate machine-learned quasi-Lagrangian fluid dynamics surrogates. Our experiments use a new quasi-Lagrangian 2D turbulent Kolmogorov dataset, in which velocities from a high-fidelity direct numerical simulation (DNS) solver are spectrally interpolated onto fluid particles, interleaved with particle relaxations to achieve weakly compressible fluid dynamics. We compare six machine-learning parametrizations for evolving the positions and velocities of particles. Our results show that learning simple unconstrained correction terms yields coarse-grained simulations that align well with the reference high-fidelity simulation.
Presenter: ~Nicholas_Gao1
Submission Number: 22