Explicit Density Approximation for Neural Implicit Samplers Using a Bernstein-Based Convex Divergence
Abstract: Rank-based objectives such as the invariant statistical loss (ISL) are robust, likelihood-free tools for training implicit generative models. We propose dual-ISL, obtained by interchanging the roles of the target density $p$ and the model density $\tilde p$ within ISL, which induces a convex optimization problem over model densities. We show that the associated rank-based discrepancy $d_K$ is continuous under weak and $L^1$ convergence and convex in its first argument, properties not shared by classical divergences such as the KL divergence or the Wasserstein distance. Additionally, we prove that $d_K$ admits an $L^2$ interpretation: it is the projection of the density ratio $q = p/\tilde p$ onto a Bernstein polynomial basis. This yields explicit truncation-error bounds, sharp convergence rates, and a closed-form expression for the truncated density approximation. To handle multivariate data, we further introduce a sliced dual-ISL via random one-dimensional projections that preserves both continuity and convexity. Empirically, across several benchmarks, dual-ISL delivers faster and smoother convergence than standard ISL and achieves competitive, often superior, mode coverage relative to state-of-the-art implicit models (modern GAN baselines, including multi-critic setups), while providing an explicit density approximation.
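As an illustrative sketch of the Bernstein-basis view (not the paper's implementation), suppose the model $\tilde p$ is Uniform$(0,1)$, so its CDF is $\tilde F(y)=y$, and the target $p$ is Beta$(2,2)$; both choices, the degree $K=10$, and the helper `density_estimate` are assumptions made for this example. The rank of a target sample among $K$ model draws has pmf $a_k \approx \int p(y)\, b_{k,K}(\tilde F(y))\, dy$, so $\hat p(y) = (K{+}1)\sum_k a_k\, b_{k,K}(\tilde F(y))$ is an explicit Bernstein-smoothed density approximation recoverable from rank statistics alone:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
K = 10        # model samples per rank statistic (Bernstein degree)
N = 20000     # number of target samples

# Target p: Beta(2,2) on [0,1]; model p~: Uniform(0,1), so F~(y) = y.
y = rng.beta(2.0, 2.0, size=N)

# Rank of each target sample among K fresh model (uniform) draws.
u = rng.uniform(size=(N, K))
ranks = (u < y[:, None]).sum(axis=1)            # values in {0, ..., K}

# Empirical rank pmf a_k; if p = p~, this is uniform on {0, ..., K}.
a = np.bincount(ranks, minlength=K + 1) / N

def density_estimate(x):
    """Explicit estimate p_hat(x) = (K+1) * sum_k a_k * b_{k,K}(x),
    where b_{k,K}(x) = C(K,k) x^k (1-x)^(K-k) is the Bernstein basis."""
    x = np.asarray(x, dtype=float)
    b = np.array([comb(K, k) * x**k * (1 - x) ** (K - k) for k in range(K + 1)])
    return (K + 1) * (a[:, None] * b).sum(axis=0)

# Sanity check: the estimate is a proper density on [0,1]
# (each b_{k,K} integrates to 1/(K+1) and the a_k sum to 1).
grid = np.linspace(0.0, 1.0, 2001)
approx_mass = density_estimate(grid).mean()     # grid average ~ integral over [0,1]
```

Since each basis function integrates to $1/(K{+}1)$ and $\sum_k a_k = 1$, the estimate integrates to one by construction; for the Beta$(2,2)$ target it correctly concentrates mass near the center of the interval.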