Keywords: mesh-free, differential operator, graph neural networks, self-supervised
TL;DR: This work introduces a self-supervised graph neural network framework for learning reusable mesh-free discrete differential operators on irregular particle stencils.
Abstract: Mesh-free numerical methods provide flexible discretisations for complex geometries, but classical discrete differential operators typically trade accuracy for cost: low-cost operators offer limited accuracy, while high-accuracy operators require substantial per-stencil computation. We introduce a parametrised framework for learning mesh-free discrete differential operators using a graph neural network trained via polynomial moment constraints derived from truncated Taylor expansions. The model maps local geometric stencils directly to discrete operator weights. This framework demonstrates that neural networks can learn classical polynomial consistency conditions while retaining robustness to irregular neighbourhood geometry. The learned operators depend only on local geometry, are resolution-agnostic, and can be reused across particle configurations and governing equations. We evaluate the framework using standard numerical analysis diagnostics, showing improved accuracy over Smoothed Particle Hydrodynamics, and a favourable accuracy–cost trade-off relative to a representative high-order consistent mesh-free method in the moderate-accuracy regime. Applicability is demonstrated by solving the weakly compressible Navier–Stokes equations using the learned operators. An open-source implementation, including datasets and evaluation tools, is available at \url{https://github.com/uom-complexfluids/nemdo}.
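To make the abstract's training signal concrete, here is a minimal NumPy sketch of polynomial moment (consistency) constraints for a mesh-free first-derivative operator on an irregular 2D stencil. This is an illustrative assumption, not the authors' implementation: the function and variable names (`moment_matrix`, `moment_loss`) are hypothetical, the constraint order is fixed at 2, and a least-squares solve stands in for the graph neural network that would normally predict the weights.

```python
import numpy as np
from math import factorial

def moment_matrix(offsets, order=2):
    """Rows: Taylor-expansion moments dx^a * dy^b / (a! b!) for a+b <= order."""
    rows = []
    for a in range(order + 1):
        for b in range(order + 1 - a):
            rows.append(offsets[:, 0]**a * offsets[:, 1]**b
                        / (factorial(a) * factorial(b)))
    return np.array(rows)  # shape: (n_moments, n_neighbours)

def moment_loss(weights, offsets, target, order=2):
    """Self-supervised loss: squared residual of the polynomial
    consistency constraints M w = t, where t selects the target
    derivative (here d/dx, i.e. the (a, b) = (1, 0) moment)."""
    M = moment_matrix(offsets, order)
    return float(np.sum((M @ weights - target)**2))

rng = np.random.default_rng(0)
offsets = rng.uniform(-1.0, 1.0, size=(12, 2))   # irregular local stencil

# Target vector: 1 at the (1, 0) moment, 0 elsewhere (same row order
# as moment_matrix), so the operator reproduces d/dx on polynomials.
idx = [(a, b) for a in range(3) for b in range(3 - a)]
target = np.array([1.0 if ab == (1, 0) else 0.0 for ab in idx])

# Least-squares weights satisfying the constraints (GNN stand-in).
w, *_ = np.linalg.lstsq(moment_matrix(offsets), target, rcond=None)
print(moment_loss(w, offsets, target))  # near 0: constraints satisfied
```

In the paper's setting, this residual would be minimised over many sampled stencils as the GNN's training loss, so the network learns to emit weights satisfying the same consistency conditions for arbitrary neighbourhood geometry.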
Journal Opt In: Yes, I want to participate in the IOP focus collection submission
Journal Corresponding Email: lucas.gerkenstarepravo@postgrad.manchester.ac.uk
Journal Notes: Planned Extensions: The journal version will expand upon the workshop submission by integrating the extensive theoretical derivations, stability analyses, and comprehensive ablation studies currently held in the 12-page appendix. Specifically, we will incorporate the formal derivation of the loss function (Appendix A), detailed architectural specifications (Appendix C), and comprehensive ablation results (Appendix D) into the main manuscript. Furthermore, we will move the qualitative analysis of operators and PDE results, alongside stability results (Appendix E), into the primary text to provide a self-contained presentation of the proposed method.
Response to Reviewer Feedback: We will address the ICLR reviewers’ inquiries regarding equivariant architectures and Lagrangian approaches by expanding the Discussion and Conclusion. We will provide a more detailed technical justification for our current architectural choices and clarify why the current experimental suite serves as a foundational validation of the method. While we agree that 3D scaling, Lagrangian simulations, and equivariance are promising directions, we will frame these as distinct future investigations to maintain the focus on the method's core mechanics and its primary introduction to the field.
Timeline: The expanded manuscript will be ready for submission within 4 weeks, as the core experimental results and theoretical proofs are already finalized.
Submission Number: 24