Minorization-Maximization for Learning Determinantal Point Processes

Published: 06 Nov 2023, Last Modified: 06 Nov 2023. Accepted by TMLR.
Abstract: A determinantal point process (DPP) is a powerful probabilistic model that generates diverse random subsets from a ground set. Since a DPP is characterized by a positive definite kernel, a DPP on a finite ground set can be parameterized by a kernel matrix. DPPs have recently gained attention in the machine learning community and have been applied to various practical problems; however, there is still room for further research on learning DPPs. In this paper, we propose a simple learning rule for full-rank DPPs based on a minorization-maximization (MM) algorithm, which monotonically increases the likelihood at each iteration. We show that the minorizer used in our MM algorithm locally provides a tighter lower bound than that of an existing method. We also generalize the algorithm for further acceleration. In experiments on both synthetic and real-world datasets, our method outperforms existing methods in most settings. Our code is available at https://github.com/ISMHinoLab/DPPMMEstimation.
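The objective maximized by such learning rules is the standard DPP log-likelihood: for a kernel matrix L over a ground set of size N and observed subsets A_1, ..., A_n, it equals the sum of log det(L_{A_i}) minus n log det(L + I). As a point of reference only, and not the paper's MM update itself, a minimal NumPy sketch of this objective (function and variable names are illustrative) could look like the following:

```python
import numpy as np

def dpp_log_likelihood(L, samples):
    """Log-likelihood of observed subsets under a DPP with kernel matrix L.

    L       : (N, N) positive definite kernel matrix over the ground set.
    samples : iterable of index arrays, each an observed subset of {0, ..., N-1}.
    """
    N = L.shape[0]
    # Normalization term log det(L + I), shared by every observed subset.
    _, log_det_norm = np.linalg.slogdet(L + np.eye(N))
    ll = 0.0
    for A in samples:
        # Unnormalized term: log det of the principal submatrix indexed by A.
        _, log_det_A = np.linalg.slogdet(L[np.ix_(A, A)])
        ll += log_det_A - log_det_norm
    return ll
```

An MM scheme such as the one proposed in the paper would, at each iteration, maximize a surrogate (minorizer) of this objective, guaranteeing that the value returned above never decreases across iterations; the specific minorizer is given in the paper.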
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/ISMHinoLab/DPPMMEstimation
Supplementary Material: zip
Assigned Action Editor: ~Roman_Garnett1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1153