Variational excess risk bound for general state space models

TMLR Paper 2002 Authors

02 Jan 2024 (modified: 14 May 2024). Decision pending for TMLR.
Abstract: In this paper, we consider variational autoencoders (VAEs) for general state space models. To analyze the excess risk associated with these VAEs, we consider a backward factorization of the variational distributions; such backward factorizations were recently proposed to perform online variational learning and to obtain upper bounds on the variational estimation error. When independent observation trajectories are available, and under strong mixing assumptions on the state space model and on the variational distribution, we provide an oracle inequality that is explicit in the number of samples and in the length of the observation sequences. We then derive consequences of this theoretical result. In particular, when the data distribution is given by a state space model, we provide upper bounds on the Kullback-Leibler divergence between the data distribution and its estimator, and between the variational posterior and the estimated state space posterior distribution. Under classical assumptions, we prove that our results apply to Gaussian backward kernels built with dense and recurrent neural networks.
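To fix ideas, backward factorizations of this kind typically write the variational distribution of the latent states x_{0:T} given observations y_{0:T} as a terminal marginal combined with backward Markov kernels. A schematic form, with notation assumed here rather than taken from the paper, is

q_\phi(x_{0:T} \mid y_{0:T}) \;=\; q_\phi(x_T \mid y_{0:T}) \prod_{t=0}^{T-1} q_{\phi,t}\!\left(x_t \mid x_{t+1}, y_{0:t}\right),

where each backward kernel q_{\phi,t} may, as in the setting considered by the authors, be Gaussian with mean and covariance parameterized by dense or recurrent neural networks.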
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Tom_Rainforth1
Submission Number: 2002