Explicit Density Approximation for Neural Implicit Samplers Using a Bernstein-Based Convex Divergence
TL;DR: We introduce dual-ISL, a convex, likelihood-free divergence based on Bernstein polynomial projections that gives implicit generative models an explicit, closed-form density estimator with provable convergence guarantees.
Abstract: Rank-based objectives such as the invariant statistical loss (ISL) are robust, likelihood-free tools for training implicit generative models. We propose \emph{dual-ISL}, obtained by interchanging the roles of the target density $p$ and the model density $\tilde p$ within ISL, which induces a \emph{convex} optimization problem over model densities. We show that the associated rank-based discrepancy $d_K$ is \emph{continuous} under weak and $L^1$ convergence and \emph{convex} in its first argument, properties not shared by classical divergences such as the KL or Wasserstein distances. Additionally, we prove that $d_K$ admits an $L^2$ interpretation: it is the projection of the density ratio $q=p/\tilde p$ onto a Bernstein polynomial basis. This yields explicit truncation-error bounds, sharp convergence rates, and a closed-form expression for the truncated density approximation. To handle multivariate data, we further introduce a sliced dual-ISL via random one-dimensional projections that preserves both continuity and convexity. Empirically, across several benchmarks, dual-ISL delivers faster and smoother convergence than standard ISL and offers competitive, often superior, mode coverage relative to state-of-the-art implicit models (modern GAN baselines, including multi-critic setups), while providing an explicit density approximation.
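To make the "explicit density approximation via Bernstein projection" idea concrete, the sketch below is a minimal, hedged illustration (not the authors' implementation): the density ratio $q = p/\tilde p$, viewed in the variable $u = \tilde F(x)$, is approximated with degree-$K$ Bernstein basis functions whose coefficients are plain Monte Carlo averages over target samples, and the resulting $\hat q$ is mapped back to an explicit density $\hat p(x) = \tilde p(x)\,\hat q(\tilde F(x))$. All concrete choices (standard-normal base model, Gaussian-mixture target, degree `K`, function names) are illustrative assumptions.

```python
# Hedged sketch of the idea in the abstract, not the paper's actual estimator:
# approximate the density ratio q = p / p_tilde with a Bernstein polynomial basis
# and read off an explicit, closed-form density approximation of the target.
import numpy as np
from scipy.stats import norm, binom

rng = np.random.default_rng(0)

# Base / model density p_tilde: standard normal with a known CDF (assumption).
p_tilde_pdf, p_tilde_cdf = norm.pdf, norm.cdf

def sample_target(n):
    """Target p (assumed unknown to the method): a two-component Gaussian mixture,
    accessed only through samples -- the likelihood-free setting."""
    comp = rng.random(n) < 0.5
    return np.where(comp, rng.normal(-2.0, 0.5, n), rng.normal(1.5, 0.8, n))

K = 15                      # Bernstein degree (illustrative choice)
y = sample_target(20_000)   # samples from the target only
u = p_tilde_cdf(y)          # push target samples through the model CDF into [0, 1]

# Coefficients a_k = E_p[ b_{k,K}(F_tilde(Y)) ], estimated by Monte Carlo.
# The Bernstein basis b_{k,K}(u) = C(K,k) u^k (1-u)^(K-k) is the Binomial pmf in k.
ks = np.arange(K + 1)
a = np.array([binom.pmf(k, K, u).mean() for k in ks])

def q_hat(u_eval):
    """Bernstein approximation of the density ratio q(u) in the variable u = F_tilde(x).
    The factor (K+1) makes q_hat integrate to 1 on [0, 1]."""
    basis = np.stack([binom.pmf(k, K, u_eval) for k in ks])   # shape (K+1, n)
    return (K + 1) * a @ basis

def p_hat(x):
    """Explicit closed-form density approximation of the target: p_tilde * q_hat(F_tilde)."""
    return p_tilde_pdf(x) * q_hat(p_tilde_cdf(x))

# Sanity check: p_hat should roughly integrate to 1 and recover both mixture modes.
xs = np.linspace(-5.0, 5.0, 1001)
print("integral of p_hat ~", (p_hat(xs) * (xs[1] - xs[0])).sum())
```

Raising the degree $K$ trades truncation bias for estimation variance, which is the trade-off the abstract's truncation-error bounds quantify; the sliced multivariate variant would apply the same one-dimensional construction along random projection directions.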
Submission Number: 575