Continuous-time dynamics models, such as neural ordinary differential equations, enable accurate modeling of the underlying dynamics in time-series data. However, parameterizing the dynamics with neural networks makes it difficult for humans to identify dependence structures, especially in the presence of delayed effects. Consequently, these models are unattractive when capturing dependence matters more than prediction accuracy, e.g., in tsunami forecasting. In this paper, we present a novel method for learning dependence structures in continuous-time dynamics models. Inspired by neural graphical modeling, we promote weight sparsity in the network's first layer during training. Once trained, we prune the sparse weights to identify dependence structures. In evaluation, we first test our method in scenarios where the exact dependence structures of the time series are known. Our method precisely captures the underlying dependence structure even in the presence of delayed effects. We further evaluate our method on a real-world tsunami forecasting task, where the exact dependence structures are unknown. Even in this challenging case, our method effectively learns physically consistent dependence structures while achieving high forecasting accuracy.
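To make the sparsify-then-prune idea concrete, below is a minimal sketch in Python/PyTorch of a first-layer group-lasso penalty on an MLP dynamics function and a threshold-based pruning step. The class and function names (`DynamicsMLP`, `group_lasso_penalty`, `dependence_structure`), the hidden size, the threshold `tau`, and the weight `lambda_sparse` are illustrative assumptions, not the paper's exact architecture, loss, or pruning rule.

```python
# Sketch only: a generic first-layer sparsity penalty for a dynamics MLP.
# In neural-graphical-modeling-style setups, one such network is often trained
# per output variable so that surviving first-layer columns indicate its parents;
# the paper's precise setup may differ.
import torch
import torch.nn as nn

class DynamicsMLP(nn.Module):
    """Approximates dx/dt = f(x); sparsity in the first layer's columns
    indicates which input variables the dynamics actually depend on."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.first = nn.Linear(dim, hidden)                 # layer to sparsify
        self.rest = nn.Sequential(nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, x):
        return self.rest(self.first(x))

def group_lasso_penalty(layer: nn.Linear) -> torch.Tensor:
    # Sum of L2 norms of input columns; drives entire columns toward zero.
    return layer.weight.norm(dim=0).sum()

def dependence_structure(layer: nn.Linear, tau: float = 1e-2) -> torch.Tensor:
    # After training, prune: keep inputs whose column norm exceeds a threshold.
    return layer.weight.norm(dim=0) > tau

# Toy training step: prediction loss (placeholder targets) plus the sparsity term.
model = DynamicsMLP(dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 3)                  # toy batch of states
target_dxdt = torch.randn(16, 3)        # toy derivative targets (placeholder)
lambda_sparse = 1e-3

opt.zero_grad()
pred = model(torch.tensor(0.0), x)
loss = ((pred - target_dxdt) ** 2).mean() + lambda_sparse * group_lasso_penalty(model.first)
loss.backward()
opt.step()

print(dependence_structure(model.first))  # boolean mask over input variables
```

In practice the prediction loss would come from rolling the dynamics forward with an ODE solver and comparing to observed trajectories; the sketch uses random placeholder targets only to keep the example self-contained.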