Training and Inference on Any-Order Autoregressive Models the Right Way

Published: 13 Jul 2023, Last Modified: 22 Aug 2023, TPM 2023
Keywords: any-order autoregressive models, arbitrary conditionals, tractable probabilistic modeling
TL;DR: We improve Any-Order Autoregressive Models by selecting univariate conditionals in a clever way during training and inference.
Abstract: Conditional inference on arbitrary subsets of variables is a core problem in probabilistic inference, with important applications such as masked language modeling and image inpainting. In recent years, the family of Any-Order Autoregressive Models (AO-ARMs) -- closely related to popular models such as BERT and XLNet -- has shown breakthrough performance in arbitrary conditional tasks across a sweeping range of domains. However, in spite of their success, in this paper we identify significant improvements to be made to previous formulations of AO-ARMs. First, we show that AO-ARMs suffer from redundancy in their probabilistic model, i.e., they define the same distribution in multiple different ways. We alleviate this redundancy by training on a smaller set of univariate conditionals that still maintains support for efficient arbitrary conditional inference. Second, we upweight the training loss for univariate conditionals that are evaluated more frequently during inference. Our method leads to improved performance with no compromises on tractability, giving state-of-the-art likelihoods in arbitrary conditional modeling on text (Text8), image (CIFAR10, ImageNet32), and continuous tabular data domains.
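To make the reweighting idea concrete, below is a minimal PyTorch sketch of a weighted AO-ARM training step. It is an illustration under assumptions, not the authors' implementation: the model signature `model(x_masked, observed)`, the use of token id 0 as the mask symbol, and the `weight_fn` hook (standing in for the paper's upweighting of conditionals that are evaluated more often at inference) are all hypothetical.

```python
import torch
import torch.nn.functional as F

def ao_arm_training_step(model, x, weight_fn=None):
    """One weighted AO-ARM training step (illustrative sketch only).

    Assumptions (not from the paper): `model(x_masked, observed)` returns
    logits of shape (batch, seq_len, vocab_size); token id 0 acts as the
    mask symbol; `weight_fn(num_observed, seq_len)` is a hypothetical hook
    that upweights the univariate conditionals needed most often at inference.
    """
    batch, seq_len = x.shape
    # For each example, sample how many variables are treated as observed.
    num_observed = torch.randint(0, seq_len, (batch,), device=x.device)
    # Mark a uniformly random subset of that size as observed context.
    perm = torch.rand(batch, seq_len, device=x.device).argsort(dim=1)
    observed = perm < num_observed.unsqueeze(1)

    # Mask out the unobserved positions and predict them from the context.
    x_masked = torch.where(observed, x, torch.zeros_like(x))
    logits = model(x_masked, observed)                      # (B, L, V)

    # Per-token negative log-likelihood, averaged over the masked positions.
    nll = F.cross_entropy(logits.transpose(1, 2), x, reduction="none")
    nll = (nll * (~observed)).sum(dim=1) / (seq_len - num_observed)

    # Reweight each sampled conditional; uniform weights recover the
    # standard (unweighted) any-order objective.
    w = weight_fn(num_observed, seq_len) if weight_fn is not None else torch.ones_like(nll)
    return (w * nll).mean()
```

In this sketch, `weight_fn=None` corresponds to the usual uniform-ordering objective; the paper's contribution, as summarized in the abstract, is to restrict training to a smaller, non-redundant set of univariate conditionals and to weight them by how frequently each is evaluated during arbitrary conditional inference.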
Submission Number: 1