Marginal Tail-Adaptive Normalizing Flows

Published: 28 Jan 2022, Last Modified: 22 Oct 2023
ICLR 2022 Submitted
Readers: Everyone
Keywords: Normalizing Flows, Heavy-Tailed Data, Generative Models
Abstract: Learning the tail behavior of a distribution is a notoriously difficult problem. The number of samples from the tail is small, and deep generative models, such as normalizing flows, tend to concentrate on learning the body of the distribution. In this paper, we focus on improving the ability of normalizing flows to correctly capture the tail behavior and, thus, to form more accurate models. We prove that the marginal tailedness of a triangular flow can be controlled via the tailedness of the marginals of its base distribution. This theoretical insight leads us to a novel type of triangular flow based on learnable base distributions and data-driven permutations. Since the proposed flows preserve marginal tailedness, we call them marginal tail-adaptive flows (mTAFs). An empirical analysis on synthetic data shows that mTAF improves on the robustness and efficiency of vanilla flows and, motivated by our theory, makes it possible to generate samples from the tails of the distribution. More generally, our experiments affirm that a careful choice of the base distribution is an effective way to introduce inductive biases into normalizing flows.
One-sentence Summary: We develop novel theoretical results about the tail behavior of the distributions learned by affine triangular flows, which motivate marginal tail-adaptive flows.
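For intuition, here is a minimal sketch (not the authors' code) of the core idea described in the abstract: each data marginal is paired with a base marginal of matching tailedness, e.g. a Student-t with a learnable degrees-of-freedom parameter for heavy-tailed dimensions and a standard Gaussian otherwise, and this base distribution would then be composed with an affine triangular (autoregressive) transform. The Hill-style tail estimate, the threshold, and all names below (`tail_index_estimate`, `MarginalTailAdaptiveBase`, `heavy_mask`) are illustrative assumptions, not the paper's implementation.

```python
import math
import torch
import torch.nn as nn
from torch.distributions import Normal, StudentT


def tail_index_estimate(x, k=50):
    # Hill-type estimate on the k largest absolute values of one marginal;
    # small values suggest heavy tails (illustrative heuristic only).
    top = x.abs().sort(descending=True).values[:k]
    return 1.0 / torch.log(top[:-1] / top[-1]).mean()


class MarginalTailAdaptiveBase(nn.Module):
    """Base distribution with per-dimension marginals: Student-t with a
    learnable degrees-of-freedom parameter where heavy_mask is True,
    a standard Gaussian otherwise."""

    def __init__(self, heavy_mask, init_df=3.0):
        super().__init__()
        self.register_buffer("heavy_mask", heavy_mask)  # bool tensor, shape (d,)
        n_heavy = int(heavy_mask.sum())
        self.log_df = nn.Parameter(torch.full((n_heavy,), math.log(init_df)))

    def log_prob(self, z):
        lp = torch.zeros(z.shape[0], device=z.device)
        df, j = self.log_df.exp(), 0
        for i in range(z.shape[1]):
            if self.heavy_mask[i]:
                lp = lp + StudentT(df[j]).log_prob(z[:, i])
                j += 1
            else:
                lp = lp + Normal(0.0, 1.0).log_prob(z[:, i])
        return lp

    def sample(self, n):
        cols, df, j = [], self.log_df.exp(), 0
        for i in range(self.heavy_mask.shape[0]):
            if self.heavy_mask[i]:
                cols.append(StudentT(df[j]).sample((n,)))
                j += 1
            else:
                cols.append(Normal(0.0, 1.0).sample((n,)))
        return torch.stack(cols, dim=1)


# Toy example: two light-tailed and two heavy-tailed data marginals.
x = torch.cat([torch.randn(1000, 2), StudentT(2.0).sample((1000, 2))], dim=1)
heavy = torch.tensor([tail_index_estimate(x[:, i]).item() < 4.0 for i in range(x.shape[1])])
base = MarginalTailAdaptiveBase(heavy)
z = base.sample(16)
print(base.log_prob(z).shape)  # torch.Size([16])
```

In the paper, a data-driven permutation additionally groups light- and heavy-tailed dimensions before the triangular transform; the sketch above only flags them and leaves the composition with an affine autoregressive flow to the reader.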
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2206.10311/code)