Prior knowledge meets Neural ODEs: a two-stage training method for improved explainability

01 Mar 2023 (modified: 15 May 2023) · Submitted to Tiny Papers @ ICLR 2023
Keywords: Neural ODE, constrained optimization, explainability, neural network, natural systems, differential equations
TL;DR: A two-stage training method that adds prior knowledge, in the form of constraints, to Neural ODEs.
Abstract: Neural Ordinary Differential Equations (ODEs) have been used extensively to model physical systems because they represent a continuous-time function that can make predictions over the entire time domain. However, the parameters of these physical systems are often subject to strict laws or constraints, and there is no guarantee that a Neural ODE model satisfies them. We therefore propose a two-stage training method for Neural ODEs. The first stage finds feasible parameters by minimizing a loss function defined by the constraint violations. The second stage improves the feasible solution by minimizing the distance between the predicted and ground-truth values. By training the Neural ODE in two stages, we ensure that the governing laws of the system are satisfied and that the model fits the data.
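The sketch below illustrates the two-stage idea described in the abstract, not the authors' implementation: a small PyTorch network as the ODE right-hand side, a fixed-step Euler integrator, and a made-up non-negativity constraint on the state as the "prior knowledge". All names (`ODEFunc`, `constraint_violation`, `two_stage_train`) and the choice of penalty in stage 2 are illustrative assumptions.

```python
# Minimal, hypothetical sketch of two-stage Neural ODE training:
# stage 1 minimizes a constraint-violation loss, stage 2 fits the data.
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Learned right-hand side dy/dt = f_theta(y)."""
    def __init__(self, dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, y):
        return self.net(y)


def integrate(func, y0, ts):
    """Fixed-step Euler integration; returns a trajectory of shape (T, dim)."""
    ys = [y0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        ys.append(ys[-1] + (t1 - t0) * func(ys[-1]))
    return torch.stack(ys)


def constraint_violation(traj):
    """Example constraint (an assumption): state components must stay
    non-negative. Returns a penalty that is zero iff the constraint holds."""
    return torch.relu(-traj).pow(2).sum()


def two_stage_train(func, y0, ts, y_true, lr=1e-3, n1=500, n2=500):
    opt = torch.optim.Adam(func.parameters(), lr=lr)
    # Stage 1: drive the constraint-violation loss toward zero (feasibility).
    for _ in range(n1):
        opt.zero_grad()
        loss = constraint_violation(integrate(func, y0, ts))
        loss.backward()
        opt.step()
    # Stage 2: fit the data while discouraging departure from feasibility
    # (here via a simple added penalty; the paper's mechanism may differ).
    for _ in range(n2):
        opt.zero_grad()
        traj = integrate(func, y0, ts)
        loss = ((traj - y_true) ** 2).mean() + constraint_violation(traj)
        loss.backward()
        opt.step()
    return func
```

The Euler integrator and the penalty-based stage 2 are simplifications chosen to keep the sketch self-contained; a production setup would typically use an adaptive ODE solver (e.g. adjoint-based) and whatever constraint-handling scheme the paper actually specifies.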