Keywords: LLM, Agent, PDE, PINN
Abstract: Physics-informed neural networks (PINNs) provide a powerful approach for solving partial differential equations (PDEs), yet constructing a usable PINN remains labor-intensive and error-prone. Scientists must translate problems into formal PDE formulations, design appropriate network architectures and loss functions, and implement stable training pipelines. Existing large language model (LLM) approaches address isolated steps such as code generation or architecture suggestion, but typically assume that a formal PDE is already specified, and therefore lack an end-to-end perspective.
We present Lang-PINN, an LLM-driven multi-agent system that builds trainable PINNs directly from natural language task descriptions. Lang-PINN coordinates four complementary agents: a PDE Agent that parses task descriptions into symbolic PDEs, a PINN Agent that selects suitable architectures, a Code Agent that generates modular and executable implementations, and a Feedback Agent that executes code and diagnoses errors for iterative refinement.
This design transforms informal task statements into executable and verifiable PINN solvers. Experiments show that Lang-PINN achieves substantially lower errors and greater robustness than competitive baselines: mean squared error (MSE) is reduced by up to 3–5 orders of magnitude, end-to-end execution success improves by more than 50%, and overall time overhead is reduced by up to 74%.
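The four-agent pipeline described above can be sketched as a simple refinement loop. The following is an illustrative sketch only, assuming stubbed agents; all function and field names (`pde_agent`, `pinn_agent`, `code_agent`, `feedback_agent`, `Task`) are hypothetical and do not reflect the paper's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Lang-PINN agent loop from the abstract.
# Agent internals are stubbed; names and return values are illustrative.

@dataclass
class Task:
    description: str          # natural-language problem statement
    pde: str = ""             # symbolic PDE produced by the PDE Agent
    arch: str = ""            # architecture chosen by the PINN Agent
    code: str = ""            # implementation emitted by the Code Agent
    history: list = field(default_factory=list)  # feedback transcript

def pde_agent(task: Task) -> Task:
    # Parse the task description into a symbolic PDE (stubbed).
    task.pde = "u_t = alpha * u_xx"
    return task

def pinn_agent(task: Task) -> Task:
    # Select an architecture suited to the parsed PDE (stubbed).
    task.arch = "MLP(depth=4, width=64, tanh)"
    return task

def code_agent(task: Task) -> Task:
    # Generate a modular, executable training script (stubbed).
    task.code = f"# train {task.arch} on residual of {task.pde}"
    return task

def feedback_agent(task: Task):
    # Execute the code and diagnose errors; None signals success.
    return None if task.code else "empty implementation"

def lang_pinn(description: str, max_rounds: int = 3) -> Task:
    task = Task(description)
    for _ in range(max_rounds):
        task = code_agent(pinn_agent(pde_agent(task)))
        error = feedback_agent(task)
        if error is None:
            return task             # verified, executable solver
        task.history.append(error)  # feed diagnosis into the next round
    raise RuntimeError("failed to produce a runnable PINN")

solver = lang_pinn("Solve the 1D heat equation on [0,1] with u(x,0)=sin(pi*x).")
```

The loop makes the abstract's claim concrete: each round runs parse, design, and generation, and the Feedback Agent's diagnosis is carried into the next iteration until the code executes.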
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 60