PitStop: Physics-Informed Training with Gradient Stopping

ICLR 2026 Conference Submission 14035 Authors

18 Sept 2025 (modified: 21 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Optimization, Physics-Informed Training
Abstract: Physics-informed learning offers a powerful approach for modeling physical systems by enforcing governing equations directly within the training process. However, optimizing such models remains inherently challenging, especially for large systems, due to the ill-conditioned nature of the underlying residual-based loss functions. In this paper, we critically examine the limitations of classical optimization techniques by developing a comprehensive theoretical framework for physics-informed setups, including insights into convergence guarantees, convergence speed, and fixed points. We then introduce PitStop, a novel optimization method for physics-informed training based on gradient stopping, which overcomes the limitations of classical methods by backpropagating feedback differently from the standard chain rule of calculus. The method is motivated and mathematically analyzed within our theoretical framework, incurs no additional computational cost compared to standard gradients, and achieves superior results in our experiments. Our work paves the way for more scalable and reliable physics-informed model training by fundamentally rethinking optimization paradigms.
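To make the gradient-stopping idea concrete, the sketch below contrasts a standard physics-informed residual loss with a variant in which part of the residual is detached from backpropagation, so that the feedback signal departs from the plain chain rule. This is a minimal illustration only: the toy ODE, the network, and in particular the placement of `stop_gradient` are assumptions for exposition and are not taken from the paper's actual PitStop update rule.

```python
import jax
import jax.numpy as jnp


def u(params, x):
    # Tiny MLP surrogate u_theta(x); scalar input, scalar output.
    w1, b1, w2, b2 = params
    h = jnp.tanh(x * w1 + b1)
    return jnp.dot(h, w2) + b2


def residual_standard(params, x):
    # Residual of a toy nonlinear ODE u'(x) + u(x)^2 = 0,
    # differentiated with the full chain rule during backpropagation.
    du = jax.grad(u, argnums=1)(params, x)
    return du + u(params, x) ** 2


def residual_stopped(params, x):
    # Hypothetical gradient-stopping variant: one factor of the nonlinear
    # term is detached, so the backpropagated feedback no longer follows
    # the standard chain rule for this residual.
    du = jax.grad(u, argnums=1)(params, x)
    uu = u(params, x)
    return du + jax.lax.stop_gradient(uu) * uu


def loss(params, xs, residual_fn):
    # Mean squared residual over collocation points xs.
    return jnp.mean(jax.vmap(lambda x: residual_fn(params, x) ** 2)(xs))


# Usage: identical forward cost, different backward signal.
key = jax.random.PRNGKey(0)
hidden = 16
params = (jax.random.normal(key, (hidden,)), jnp.zeros(hidden),
          jax.random.normal(key, (hidden,)), jnp.zeros(()))
xs = jnp.linspace(0.0, 1.0, 32)
g_standard = jax.grad(loss)(params, xs, residual_standard)
g_stopped = jax.grad(loss)(params, xs, residual_stopped)
```

Because `stop_gradient` only alters the backward pass, the modified loss is evaluated at the same cost as the standard one, consistent with the abstract's claim that the method adds no computational overhead over standard gradients.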
Supplementary Material: zip
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 14035