Physics-Informed Learning Near Critical Transitions: A Comparative Study of UDEs and Neural ODEs

Published: 24 Sept 2025 · Last Modified: 26 Dec 2025 · NeurIPS 2025 AI4Science Poster · CC BY 4.0
Track: Track 1: Original Research/Position/Education/Attention Track
Keywords: Phase Transitions, Activation Function, Scientific Machine Learning, UDE, Neural ODE, Chaos, Bifurcation, Interpretability
TL;DR: We show that UDEs outperform Neural ODEs in modeling neural dynamics across order–chaos transitions, achieving lower errors and greater robustness, but with trade-offs in interpretability near critical points.
Abstract: We test a central hypothesis in physics-informed machine learning: that explicitly incorporating known physical structure enables superior learning near critical transitions, but at a cost to mechanistic interpretability. Neural systems exhibit rich computational behavior near critical transitions between ordered and chaotic dynamics. Learning these transitions poses unique challenges due to slow dynamics, sensitivity to parameters, and multi-scale temporal structure. We systematically compare Universal Differential Equations (UDEs) and Neural ODEs for learning a two-dimensional neural dynamical system across stability regimes. Through Lyapunov landscape analysis, we demonstrate that activation function choice fundamentally shapes bifurcation structure, with Swish enabling smooth order-to-chaos transitions unlike ReLU or sigmoid. Our comprehensive evaluation confirms the hypothesis: UDEs consistently outperform Neural ODEs, achieving $2$--$10\times$ lower RMSE across all coupling strengths and superior robustness under external perturbations. Critically, both methods struggle near transition points ($\lambda \sim 0$), but UDEs maintain better performance. Surprisingly, while UDEs excel at dynamics prediction, they fail to accurately reconstruct the underlying activation function, revealing fundamental trade-offs between system-level learning and component interpretability in physics-informed approaches.
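The abstract does not specify the two-dimensional system or the Lyapunov estimator used. As a hedged illustration, the sketch below assumes a standard rate model $\dot{x} = -x + W\,\phi(x)$ with a swappable activation (Swish shown) and estimates the largest Lyapunov exponent with a Benettin-style two-trajectory method; the system, integrator, and parameters are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def swish(x):
    # Swish activation: x * sigmoid(x); the paper contrasts this
    # with ReLU and sigmoid in shaping the bifurcation structure.
    return x / (1.0 + np.exp(-x))

def largest_lyapunov(W, phi, x0, dt=0.01, steps=20000, d0=1e-8):
    """Benettin-style estimate of the largest Lyapunov exponent for
    the assumed rate model x' = -x + W @ phi(x) (explicit Euler).

    Two nearby trajectories are integrated together; after each step
    the log growth of their separation is accumulated and the
    perturbed trajectory is renormalized back to distance d0.
    """
    x = np.asarray(x0, dtype=float)
    rng = np.random.default_rng(0)
    v = rng.normal(size=x.shape)
    y = x + d0 * v / np.linalg.norm(v)  # perturbed copy at distance d0
    acc = 0.0
    for _ in range(steps):
        x = x + dt * (-x + W @ phi(x))
        y = y + dt * (-y + W @ phi(y))
        d = np.linalg.norm(y - x)
        acc += np.log(d / d0)
        y = x + (d0 / d) * (y - x)  # renormalize separation
    return acc / (steps * dt)
```

Sweeping a coupling-strength scalar on `W` and plotting the returned exponent gives the kind of Lyapunov landscape the paper analyzes; with zero coupling the model reduces to $\dot{x} = -x$ and the estimate sits near $-1$, the ordered (contracting) regime.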
Submission Number: 365