Discovering Non-monotonic Autoregressive Orderings with Variational Inference

Published: 12 Jan 2021, Last Modified: 14 Jul 2023
ICLR 2021 Poster
Readers: Everyone
Keywords: variational inference, unsupervised learning, computer vision, natural language processing, optimization, reinforcement learning
Abstract: The predominant approach to language modeling encodes a sequence of tokens from left to right, but this discards a source of information: the order in which the sequence was naturally generated. One strategy to recover this information is to decode both the content and the ordering of tokens. Some prior work supervises content and ordering with hand-designed loss functions that encourage specific orders, or bootstraps from a predefined ordering; these approaches require domain-specific insight. Other prior work searches over valid insertion operations that lead to ground-truth sequences during training, which has high time complexity and cannot be efficiently parallelized. We address these limitations with an unsupervised learner that can be trained in a fully parallelizable manner to discover high-quality autoregressive orders in a data-driven way, without a domain-specific prior. The learner is a neural network that performs variational inference with the autoregressive ordering as a latent variable. Since the corresponding variational lower bound is not differentiable, we develop a practical algorithm for end-to-end optimization using policy gradients (sketched below). Strong empirical results on sequence modeling tasks suggest that our algorithm discovers, for different sequences, autoregressive orders that are competitive with or even better than fixed orders.
One-sentence Summary: The paper proposes an unsupervised learner that discovers non-monotonic autoregressive orders for sequence generation through fully parallelizable, end-to-end training without a domain-specific prior.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [xuanlinli17/autoregressive_inference](https://github.com/xuanlinli17/autoregressive_inference)
Data: [COCO](https://paperswithcode.com/dataset/coco)
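
The page carries no pseudocode, but to make the policy-gradient idea in the abstract concrete, here is a minimal PyTorch sketch of one way such a step could look: per-position scores from an inference network define a Plackett-Luce distribution over orderings, a permutation is sampled with the Gumbel-argsort trick, and the non-differentiable decoder log-likelihood is fed back through a REINFORCE estimator. This is a hedged illustration, not the authors' method: the names (`sample_order`, `encoder_scores`) are hypothetical, `log_p` is a random stand-in for a real decoder, and the paper's actual architecture, reward, and variance-reduction scheme live in the linked repository and differ in detail.

```python
import torch

def sample_order(scores):
    # Gumbel-perturbed argsort: sorting scores + Gumbel noise in
    # descending order draws a permutation from the Plackett-Luce
    # distribution parameterized by `scores`.
    gumbel = -torch.log(-torch.log(torch.rand_like(scores)))
    return torch.argsort(scores + gumbel, dim=-1, descending=True)

def plackett_luce_log_prob(scores, order):
    # log q(order | x): positions are chosen one at a time without
    # replacement, each step a softmax over the remaining positions.
    s = scores.gather(-1, order)  # scores rearranged into sampled order
    # logcumsumexp over the reversed sequence gives, at each step, the
    # log-normalizer over the not-yet-chosen positions.
    suffix_lse = torch.logcumsumexp(s.flip(-1), dim=-1).flip(-1)
    return (s - suffix_lse).sum(-1)

batch, seq_len = 4, 8
# Stand-in for the inference network's per-position scores defining q(z | x).
encoder_scores = torch.randn(batch, seq_len, requires_grad=True)

order = sample_order(encoder_scores.detach())   # discrete, non-differentiable sample
log_q = plackett_luce_log_prob(encoder_scores, order)

# Stand-in for the decoder's log-likelihood log p(x | order); in the real
# model this would come from an autoregressive decoder that consumes
# tokens in the sampled order.
log_p = torch.randn(batch)

baseline = log_p.mean()                         # crude variance-reduction baseline
reward = (log_p - baseline).detach()
loss = -(reward * log_q).mean()                 # REINFORCE surrogate for the lower-bound term
loss.backward()                                 # gradients flow into encoder_scores via log_q
```

The sample itself is detached because the gradient reaches `encoder_scores` only through `log_q`, which is exactly what makes a score-function estimator applicable when the lower bound is not differentiable in the latent ordering.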