## Symplectic Adjoint Method for Exact Gradient of Neural ODE with Minimal Memory

Published: 21 May 2021 (modified: 20 Oct 2021) · NeurIPS 2021 Poster
Keywords: ordinary differential equation, neural ODE, adjoint method, backpropagation, checkpointing scheme, symplectic integrator
TL;DR: A neural ordinary differential equation consumes a large amount of memory or takes a long time to obtain its gradient for training. The adjoint method leveraging a symplectic integrator suppresses both bottlenecks.
Abstract: A neural network model of a differential equation, namely a neural ODE, has enabled the learning of continuous-time dynamical systems and probabilistic distributions with high accuracy. The neural ODE uses the same network repeatedly during a numerical integration. The memory consumption of the backpropagation algorithm is proportional to the number of uses times the network size. This is true even if a checkpointing scheme divides the computation graph into sub-graphs. Alternatively, the adjoint method obtains a gradient by a numerical integration backward in time. Although this method consumes memory only for a single network use, it requires a high computational cost to suppress numerical errors. This study proposes the symplectic adjoint method, which is an adjoint method solved by a symplectic integrator. The symplectic adjoint method obtains the exact gradient (up to rounding error) with memory proportional to the number of uses plus the network size. The experimental results demonstrate that the symplectic adjoint method consumes much less memory than the naive backpropagation algorithm and checkpointing schemes, performs faster than the adjoint method, and is more robust to rounding errors.
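To make the adjoint idea in the abstract concrete, here is a minimal sketch on a scalar ODE dz/dt = θ·z with loss L = z(T). This is an illustration of the generic (discrete) adjoint method only, not the paper's symplectic scheme; all function names are made up for this sketch, and the stored trajectory plays the role of the checkpoints that the symplectic adjoint method would avoid keeping in full.

```python
# Hypothetical scalar "neural ODE": dz/dt = f(z, theta) = theta * z.
def f(z, theta):
    return theta * z

def df_dz(z, theta):
    return theta  # partial derivative of f with respect to z

def df_dtheta(z, theta):
    return z      # partial derivative of f with respect to theta

def forward_euler(z0, theta, T, n):
    """Forward pass with explicit Euler; stores the whole trajectory,
    mimicking what backpropagation through the solver would retain."""
    h = T / n
    zs = [z0]
    for _ in range(n):
        zs.append(zs[-1] + h * f(zs[-1], theta))
    return zs

def adjoint_gradient(zs, theta, T):
    """Integrate the adjoint variable a backward in time and accumulate
    dL/dtheta, for the loss L = z(T), so a(T) = dL/dz(T) = 1."""
    n = len(zs) - 1
    h = T / n
    a = 1.0
    grad = 0.0
    for k in range(n - 1, -1, -1):
        # Adjoint of the Euler step z_{k+1} = z_k + h * f(z_k, theta):
        grad += a * h * df_dtheta(zs[k], theta)
        a = a * (1.0 + h * df_dz(zs[k], theta))
    return grad
```

Because this sketch differentiates the discrete Euler map itself, the resulting gradient matches a finite-difference check exactly; the continuous adjoint method instead re-solves the ODE backward without the stored trajectory, which is where the numerical-error and cost trade-off discussed in the abstract arises.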
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.