Keywords: meta-learning, latent variable model, neural ODE, neural processes, neural ODE processes, learning using privileged information
TL;DR: Can we better meta-learn dynamics if we have access to high-level descriptions at training time? Yes.
Abstract: Neural ODE Processes approach the problem of meta-learning for dynamics using a latent variable model, which permits a flexible aggregation of contextual information. This flexibility is inherited from the Neural Process framework and allows the model to aggregate sets of context observations of arbitrary size into a fixed-length representation. In the physical sciences, we often have access to structured knowledge in addition to raw observations of a system, such as the value of a conserved quantity or a description of an understood component. Taking advantage of the aggregation flexibility, we extend the Neural ODE Process model to use additional information within the Learning Using Privileged Information setting, and we validate our extension with experiments showing improved accuracy and calibration on simulated dynamics tasks.
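The set-aggregation property the abstract relies on can be illustrated with a toy sketch. This is not the authors' implementation: the function name, feature width, and the linear-map-plus-tanh encoder are hypothetical stand-ins for a trained per-point encoder, chosen only to show how context sets of arbitrary size collapse into one fixed-length, order-invariant representation.

```python
import numpy as np

def encode_context(observations, rng=None):
    """Hypothetical sketch: map each (t, y) context pair through a shared
    per-point encoder, then mean-aggregate into a fixed-length vector r.
    The weights below are random stand-ins for a learned network."""
    rng = np.random.default_rng(0) if rng is None else rng
    W = rng.standard_normal((2, 8))    # toy shared encoder: 2 inputs -> 8 features
    feats = np.tanh(observations @ W)  # (N, 8) per-point features
    return feats.mean(axis=0)          # mean is invariant to set size and order

# Context sets of any size yield the same-sized representation,
# and permuting the set leaves it unchanged.
ctx = np.array([[0.0, 1.0], [0.5, 0.8], [1.0, 0.3]])
r = encode_context(ctx)
r_perm = encode_context(ctx[::-1])
assert r.shape == (8,)
assert np.allclose(r, r_perm)
```

Under the paper's privileged-information extension, additional structured descriptors available at training time would enter through this same aggregation path, since the mean accepts encoded points of any number.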
Proposed Reviewers: Jacob Moss, jm2311@cam.ac.uk
Alexander Norcliffe, alexander.norcliffe.20@ucl.ac.uk
Community Implementations: 1 code implementation via CatalyzeX (https://www.catalyzex.com/paper/arxiv:2104.14290/code)