Can Large Language Model Help Design Effective Neural Operators for Solving Partial Differential Equations?

20 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Neural operators, operator learning, partial differential equations, large language models, automated model design
Abstract: Neural operators promise mesh- and resolution-independent surrogates for solving partial differential equations, yet building architectures that respect equation structure and train reliably still requires substantial expert effort. We ask whether a large language model can design neural operators end to end. We present a four-agent pipeline with the roles Theorist, Programmer, Critic, and Refiner. The Theorist selects a mathematically grounded operator for a user-specified PDE and derives its formulation. The Programmer produces a self-contained PyTorch implementation. The Critic performs adversarial review to expose numerical and software issues. The Refiner applies targeted corrections. An automated PDE solver completes the loop by generating data, training the synthesized model, and reporting evaluation metrics and plots. Across a broad suite of PDE benchmarks, the LLM-designed operators consistently outperform strong baselines and the prior state of the art in accuracy and sample efficiency, while remaining stable under varied discretizations and noisy initial conditions. Ablation studies show that the Critic and Refiner steps are essential for numerical stability and generalization. These results suggest that LLMs can act as principled collaborative designers of PDE operators, translating problem statements into executable and competitive architectures and moving toward automated, theory-aware scientific machine learning.
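The Theorist → Programmer → Critic → Refiner loop described in the abstract can be sketched as a simple iterative pipeline. This is a minimal illustrative sketch only: all function names, the string-based "artifacts", and the stopping rule are assumptions for exposition, not the paper's actual LLM-driven agents.

```python
# Hypothetical sketch of the four-agent design loop. Each agent is stubbed
# with a plain Python function standing in for an LLM call; the strings
# passed between them stand in for derivations and generated code.

def theorist(pde: str) -> str:
    # Select an operator family and derive a formulation for the given PDE.
    return f"spectral operator formulation for {pde}"

def programmer(formulation: str) -> str:
    # Emit a self-contained implementation of the chosen operator.
    return f"implementation of ({formulation})"

def critic(code: str) -> list[str]:
    # Adversarial review: flag numerical/software issues (toy check here).
    return ["unnormalized spectral modes"] if "fixed" not in code else []

def refiner(code: str, issues: list[str]) -> str:
    # Apply a targeted correction for each flagged issue.
    for issue in issues:
        code += f" [fixed: {issue}]"
    return code

def design_operator(pde: str, max_rounds: int = 3) -> str:
    # One Theorist + Programmer pass, then Critic/Refiner rounds until
    # the review comes back clean or the round budget is exhausted.
    code = programmer(theorist(pde))
    for _ in range(max_rounds):
        issues = critic(code)
        if not issues:
            break
        code = refiner(code, issues)
    return code
```

In this sketch the Critic/Refiner iteration is what the ablations credit for stability: the design is only accepted once the adversarial review returns no issues.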
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 23236