Overview of supplementary materials.

code
====

Please see the instructions in README.md for installation and for running the experiments. Use the notebooks to investigate the trained models.

extra_predictions
================

Movies of the predictions for all sequences in the validation set, for all models.
Each set of movies corresponds to a row in Table 2 of the results.
The number at the end of the filename indicates the index of the sequence in the validation set.
If you select multiple files and open them with e.g. VLC media player, the movies will play sequentially.

long_term_predictions
=====================

Included are very long future frame predictions (500 frames) of KeyCLD models trained on the three environments.
The layout of these movies is the same as Figure 1 in the paper:
(a) An observation of a dynamical system is processed by a learned keypoint estimator model.
(b) The model represents the positions of the keypoints with a set of spatial probability heatmaps.
(c) Cartesian coordinates are extracted using spatial softmax and used as positional state vector to learn Lagrangian dynamics.
(d) The information in the keypoint coordinates bottleneck suffices for a learned renderer model to reconstruct the original observation, including background, reflections and shadows.
The keypoint estimator model, Lagrangian dynamics model and renderer model are learned jointly and unsupervised on sequences of images.
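Step (c) above, extracting Cartesian coordinates from the spatial probability heatmaps, can be sketched as follows. This is a minimal NumPy illustration of a standard spatial softmax, not the repository's implementation; the function name and the [-1, 1] coordinate convention are assumptions for illustration.

```python
import numpy as np

def spatial_softmax(heatmap):
    """Extract an (x, y) coordinate from a 2D heatmap of unnormalized scores.

    heatmap: array of shape (H, W).
    Returns the expected coordinate in [-1, 1] x [-1, 1].
    """
    # Softmax over all pixels turns the scores into a probability map.
    probs = np.exp(heatmap - heatmap.max())
    probs /= probs.sum()
    h, w = heatmap.shape
    # Coordinate grids, normalized to [-1, 1].
    xs = np.linspace(-1.0, 1.0, w)
    ys = np.linspace(-1.0, 1.0, h)
    # Expected coordinate under the probability map (differentiable,
    # which is what makes the keypoint bottleneck trainable end to end).
    x = (probs.sum(axis=0) * xs).sum()
    y = (probs.sum(axis=1) * ys).sum()
    return np.array([x, y])
```

Because the expectation is differentiable, gradients from the dynamics and renderer losses can flow back through the keypoint coordinates into the keypoint estimator.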

Please note that the initial velocity for these predictions was estimated from the first three frames.
Since the acrobot is a chaotic system, its trajectory is very sensitive to the initial velocity.
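One simple way to estimate an initial velocity from three consecutive frames is a central finite difference on the keypoint coordinates. This is a hedged sketch of that idea, not necessarily the estimator used in the paper; the function name and argument shapes are assumptions.

```python
import numpy as np

def estimate_initial_velocity(k0, k1, k2, dt):
    """Central-difference velocity estimate at the middle frame.

    k0, k1, k2: keypoint coordinates from three consecutive frames,
                each of shape (K, 2).
    dt: time step between frames.
    Returns an array of shape (K, 2) with the estimated velocities.
    """
    # Central difference is second-order accurate in dt,
    # whereas a forward difference (k1 - k0) / dt is only first-order.
    return (k2 - k0) / (2.0 * dt)
```

For a chaotic system like the acrobot, even the second-order error of this estimate grows exponentially along the predicted trajectory, which is why long-term predictions diverge from the ground truth.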

