On two ways to use determinantal point processes for Monte Carlo integration

Guillaume Gautier, Rémi Bardenet, Michal Valko

06 Sept 2019 (modified: 05 May 2023) · NeurIPS 2019
Abstract: When approximating an integral by a weighted sum of function evaluations, determinantal point processes (DPPs) provide a way to enforce repulsion between the evaluation points. This negative dependence is encoded by a kernel. Fifteen years before the discovery of DPPs, Ermakov & Zolotukhin (EZ, 1960) had the intuition of sampling a DPP and solving a linear system to compute an unbiased Monte Carlo estimator of the integral. In the absence of DPP machinery to derive an efficient sampler and analyze the estimator, the idea of Monte Carlo integration with DPPs was stored in the cellar of numerical integration. Recently, Bardenet & Hardy (BH, 2016) came up with a more natural estimator with a fast central limit theorem (CLT). In this paper, we first take the EZ estimator out of the cellar and analyze it using modern arguments. Second, we provide an efficient implementation to sample exactly a particular multidimensional DPP called the multivariate Jacobi ensemble, which satisfies the assumptions of the aforementioned CLT. Third, our new implementation lets us investigate the behavior of the two unbiased Monte Carlo estimators in yet unexplored regimes. Our experiments demonstrate that the choice of the kernel should be driven by a basis of functions in which the integrand is sparse or has fast-decaying coefficients. In particular, the EZ estimator perfectly integrates functions which are linear combinations of the kernel's eigenfunctions.
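The two estimators can be illustrated on a finite ground set, where a projection DPP with kernel K = UUᵀ (U with orthonormal columns) is sampled by the standard chain-rule algorithm: the BH estimator reweights each evaluation by 1/K(x, x), while the EZ estimator solves an N×N linear system in the kernel's eigenfunctions. The following is a minimal NumPy sketch under these assumptions, independent of the authors' DPPy implementation (all names here are illustrative); it checks the abstract's claim that EZ is exact on linear combinations of eigenfunctions:

```python
import numpy as np

def sample_projection_dpp(U, rng):
    """Sample a projection DPP with kernel K = U @ U.T (U: M x N, orthonormal columns)."""
    V = U.copy()
    M, N = V.shape
    sample = []
    for _ in range(N):
        prob = np.sum(V**2, axis=1)          # diagonal of the current kernel
        i = rng.choice(M, p=prob / prob.sum())
        sample.append(i)
        # Project feature vectors orthogonally to V[i]; i cannot be drawn again.
        v = V[i] / np.linalg.norm(V[i])
        V -= np.outer(V @ v, v)
    return np.array(sample)

rng = np.random.default_rng(0)
M, N = 50, 4                                  # ground set {x_0, ..., x_{M-1}}, N-point DPP
x = np.linspace(-1, 1, M)

# Orthonormal polynomial basis w.r.t. the uniform measure mu(x_i) = 1/M, via QR.
Q, _ = np.linalg.qr(np.vander(x, N, increasing=True))
Q *= np.sign(Q[0])                            # fix column signs so phi_0 = +1 everywhere
Phi = np.sqrt(M) * Q                          # Phi.T @ Phi / M = identity
U = Q                                         # DPP kernel K = U @ U.T

# Integrand: a linear combination of the kernel's eigenfunctions.
f = 2.0 * Phi[:, 0] + 3.0 * Phi[:, 2]

S = sample_projection_dpp(U, rng)

# Bardenet-Hardy estimator of int f dmu = (1/M) sum_i f(x_i).
K_diag = np.sum(U**2, axis=1)
bh = np.sum(f[S] / K_diag[S]) / M

# Ermakov-Zolotukhin: solve Phi(S) a = f(S); a[k] is unbiased for <f, phi_k>.
a = np.linalg.solve(Phi[S], f[S])
ez = a[0]                                     # phi_0 = 1, so a[0] estimates the integral

print(bh, ez)                                 # ez recovers 2.0 up to floating point
```

Since f lies exactly in the span of the basis, the solved coefficients reproduce it (a = [2, 0, 3, 0]), which is the "perfect integration" property; the BH estimate is unbiased but not exact for a single sample.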
Code Link: https://github.com/guilgautier/DPPy
CMT Num: 4215