Abstract: A key appeal of the recently proposed Neural Ordinary Differential Equation (ODE) framework is that it seems to provide a continuous-time extension of discrete residual neural networks.
As we show herein, though, trained Neural ODE models actually depend on the specific numerical method used during training.
If the trained model is supposed to be a flow generated from an ODE, it should be possible to choose another numerical solver with equal or smaller numerical error without loss of performance.
We observe that if training relies on a solver with overly coarse discretization, then testing with another solver of equal or smaller numerical error results in a sharp drop in accuracy.
In such cases, the combination of vector field and numerical method cannot be interpreted as a flow generated from an ODE, which arguably poses a fatal breakdown of the Neural ODE concept.
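This failure mode can be probed directly by solving the same learned vector field with two different solvers. Below is a minimal sketch, assuming the torchdiffeq library; the toy vector field, batch, and step size are illustrative placeholders, not the authors' experimental setup. Train-time integration uses a coarse fixed-step RK4 scheme, test-time integration uses an adaptive solver with tight tolerances, and a large gap between the two solutions signals that the trained model depends on the coarse discretization rather than defining an ODE flow.

```python
# Sketch of the solver-swap test (torchdiffeq assumed installed;
# architecture and step size are hypothetical placeholders).
import torch
from torchdiffeq import odeint

class VectorField(torch.nn.Module):
    """Toy learned vector field f(t, y)."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, dim)
        )

    def forward(self, t, y):
        return self.net(y)

f = VectorField()
y0 = torch.randn(16, 2)              # batch of initial states
t = torch.tensor([0.0, 1.0])

# Train-time solve: fixed-step RK4 with an overly coarse step size.
y_coarse = odeint(f, y0, t, method='rk4', options={'step_size': 0.5})[-1]

# Test-time solve: adaptive Dormand-Prince with tight tolerances,
# i.e. a solver of equal or smaller numerical error.
y_fine = odeint(f, y0, t, method='dopri5', rtol=1e-7, atol=1e-9)[-1]

# If (f, solver) were a genuine ODE flow, the two solutions would agree;
# a large deviation reproduces the sharp accuracy drop described above.
print('max deviation:', (y_coarse - y_fine).abs().max().item())
```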
We observe, however, that there exists a critical step size beyond which the training yields a valid ODE vector field.
We propose a method that monitors the behavior of the ODE solver during training to adapt its step size, aiming to ensure a valid ODE without unnecessarily increasing computational cost.
We verify this adaptation algorithm on a common benchmark dataset as well as a synthetic dataset.
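The abstract does not spell out the adaptation criterion, so the sketch below illustrates one plausible monitor in the spirit of the proposal: periodically compare the fixed-step solution at the current step size h with the one at h/2 and halve h when they disagree. The helper name, tolerance tau, and halving rule are assumptions for illustration, not the authors' exact method.

```python
# Hypothetical step-size monitor (torchdiffeq assumed; tau and the
# refinement rule are illustrative assumptions, not the paper's rule).
import torch
from torchdiffeq import odeint

def refine_step_size(f, y0, t, h, tau=1e-2):
    """Return a (possibly halved) step size for subsequent training steps."""
    with torch.no_grad():
        y_h = odeint(f, y0, t, method='rk4', options={'step_size': h})[-1]
        y_h2 = odeint(f, y0, t, method='rk4', options={'step_size': h / 2})[-1]
        gap = (y_h - y_h2).abs().max().item()
    # A large gap suggests the learned field is not resolved at step size h,
    # i.e. the (field, solver) pair does not yet behave like an ODE flow.
    return h / 2 if gap > tau else h
```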
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
One-sentence Summary: We explain why some Neural ODE models do not permit a continuous-depth interpretation after training and how to fix it.