Requires JAX, TensorFlow Datasets, Inference Gym (see paper), and NumPyro.
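A possible way to install the dependencies with pip (a sketch; the exact package names and versions the repo was developed against are not specified here, so check against your environment):

```shell
# Assumed PyPI package names; adjust if the repo pins specific versions.
pip install jax jaxlib tensorflow-datasets inference_gym numpyro
```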


Inference tasks:
The models folder contains a few datasets and the logistic regression model; the other two models come from the Inference Gym.
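For orientation, the logistic regression target is the usual Bayesian logistic regression log joint. A minimal sketch (function and argument names are assumptions, not the repo's API):

```python
import numpy as np

def logreg_log_joint(w, X, y, prior_scale=1.0):
    """Unnormalized log joint of Bayesian logistic regression:
    Bernoulli likelihood with logits X @ w, Gaussian prior on w."""
    logits = X @ w
    # log-likelihood via the numerically stable log-sigmoid identity
    log_lik = np.sum(y * logits - np.logaddexp(0.0, logits))
    log_prior = -0.5 * np.sum((w / prior_scale) ** 2)
    return log_lik + log_prior
```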
To generate the main result from Fig. 1, comparing all methods:
- Run main_his.py with the desired boundmode: AIS (== UHA), IW, HIS-es, HIS-ev, HISLR-es, or HISLR-ev. The suffix 'es' means a single step size epsilon shared across all dimensions; 'ev' means one epsilon per dimension. Note that HIS here actually corresponds to HVAE; HISLR is the HIS described in the paper (LR stands for learned reversal). The paper compares HIS-es, HISLR-es, AIS (i.e., UHA), and IW.
- For Hamiltonian Annealed Importance Sampling, run main_EAIS.py instead.
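The 'es'/'ev' distinction above amounts to whether the leapfrog step size is a scalar or a per-dimension vector. A self-contained sketch (not the repo's code) of a single leapfrog step illustrating both cases:

```python
import numpy as np

def leapfrog_step(x, p, grad_log_prob, eps):
    """One leapfrog step; eps may be a scalar ('es', shared across
    dimensions) or a vector with one entry per dimension ('ev')."""
    p_half = p + 0.5 * eps * grad_log_prob(x)
    x_new = x + eps * p_half
    p_new = p_half + 0.5 * eps * grad_log_prob(x_new)
    return x_new, p_new

# Standard Gaussian target, so grad log p(x) = -x
grad = lambda x: -x
x0 = np.array([1.0, -2.0, 0.5])
p0 = np.zeros(3)

x_es, p_es = leapfrog_step(x0, p0, grad, 0.1)                         # 'es'
x_ev, p_ev = leapfrog_step(x0, p0, grad, np.array([0.05, 0.1, 0.2]))  # 'ev'
```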

To tune more parameters, run main_extrapol_bridge.py; this script already uses the extrapolation rule described in the paper. The parameters to tune are selected via command-line arguments: tune_md tunes the momentum covariance (1 or 0), tune_beta, tune_vd (further tune the variational distribution), mode_eps (single step size or an affine function), and mode_br (whether the bridging densities have extra parameters or not).
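One reading of the mode_eps options is that the step size is either one shared value or an affine function of the annealing progress. A hedged sketch of that interpretation (the repo's actual parameterization may differ):

```python
def eps_schedule(k, K, mode, params):
    """Step size at annealing step k of K.
    mode='single': one shared eps, params = (eps,).
    mode='affine': eps is affine in the progress t = k / K, params = (a, b)."""
    if mode == "single":
        return params[0]
    t = k / K
    return params[0] + params[1] * t
```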

VAE:
- First, run main_save_base.py with the dataset you want (mnist, kmnist, emnist/letters). This should download the dataset automatically and store it for later reuse.
- Then, run main_joint.py (after checking that all parameters are set as desired).

Reproducing the results in the paper requires many runs with different sets of parameters; we recommend using a cluster of GPUs.
