A Greedy Approximation for k-Determinantal Point Processes

Published: 02 May 2024, Last Modified: 30 Sep 2024 · AISTATS 2024 · CC BY 4.0
Abstract: Determinantal point processes (DPPs) are an important concept in random matrix theory and combinatorics, and increasingly in machine learning. Samples from these processes exhibit a form of self-avoidance, so they are also helpful in guiding algorithms that explore to reduce uncertainty, such as in active learning, Bayesian optimization, reinforcement learning, and marginalization in graphical models. The best-known algorithms for sampling from DPPs exactly require significant computational expense, which can be unwelcome in machine learning applications when the cost of sampling is relatively low and capturing the precise repulsive nature of the DPP may not be critical. We suggest an inexpensive approximate strategy for sampling a fixed number of points (as would typically be desired in a machine learning setting) from a so-called k-DPP based on iterative inverse transform sampling. We prove that our algorithm satisfies a (1 − 1/e) approximation guarantee relative to exact sampling from the k-DPP, and provide an efficient implementation for many common kernels used in machine learning, including the Gaussian and Matérn class. Finally, we compare the empirical runtime of our method to exact and Markov-Chain-Monte-Carlo (MCMC) samplers and investigate the approximation quality in a Bayesian Quadrature (BQ) setting.
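To make the high-level idea concrete, the following is a minimal sketch of one plausible reading of "iterative inverse transform sampling" for a k-DPP: each new point is drawn, via inverse transform sampling on the discrete candidate set, with probability proportional to its conditional variance under the kernel given the points already selected (maintained with rank-one pivoted-Cholesky updates). This is an illustrative stand-in under stated assumptions, not the authors' implementation, and the function names (`gaussian_kernel`, `greedy_kdpp_sample`) are hypothetical.

```python
import numpy as np

def gaussian_kernel(X, lengthscale=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def greedy_kdpp_sample(K, k, rng):
    """Draw k distinct indices; each new index is sampled by inverse
    transform sampling from the conditional variances given the current
    selection (an illustrative sketch, not the paper's algorithm)."""
    n = K.shape[0]
    resid = np.diag(K).copy()      # conditional variances of all candidates
    C = np.zeros((n, 0))           # columns of a partial Cholesky factor
    selected = []
    for _ in range(k):
        probs = resid / resid.sum()
        u = rng.random()
        # Inverse transform sampling on the discrete distribution `probs`.
        idx = int(np.searchsorted(np.cumsum(probs), u))
        # Rank-one pivoted-Cholesky update of the conditional variances.
        col = (K[:, idx] - C @ C[idx]) / np.sqrt(resid[idx])
        C = np.hstack([C, col[:, None]])
        resid = np.maximum(resid - col**2, 0.0)
        resid[idx] = 0.0           # never pick the same point twice
        selected.append(idx)
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
sample = greedy_kdpp_sample(gaussian_kernel(X), k=5, rng=rng)
print(sample)
```

Because the conditional variance of a point shrinks as nearby points are selected, draws from this scheme inherit the self-avoiding behavior described above, while each step costs only a rank-one update rather than an eigendecomposition.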