Scalable Learning and MAP Inference for Nonsymmetric Determinantal Point Processes

Published: 12 Jan 2021, Last Modified: 22 Oct 2023 · ICLR 2021 Oral
Keywords: determinantal point processes, unsupervised learning, representation learning, submodular optimization
Abstract: Determinantal point processes (DPPs) have attracted significant attention in machine learning for their ability to model subsets drawn from a large item collection. Recent work shows that nonsymmetric DPP (NDPP) kernels have significant advantages over symmetric kernels in terms of modeling power and predictive performance. However, for an item collection of size $M$, existing NDPP learning and inference algorithms require memory quadratic in $M$ and runtime cubic (for learning) or quadratic (for inference) in $M$, making them impractical for many typical subset selection tasks. In this work, we develop a learning algorithm with space and time requirements linear in $M$ by introducing a new NDPP kernel decomposition. We also derive a linear-complexity NDPP maximum a posteriori (MAP) inference algorithm that applies not only to our new kernel but also to that of prior work. Through evaluation on real-world datasets, we show that our algorithms scale significantly better, and can match the predictive performance of prior work.
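To make the linear-complexity claim concrete, below is a minimal sketch of NDPP log-likelihood computation under an assumed low-rank kernel decomposition $L = VV^\top + BCB^\top$ (with $V, B \in \mathbb{R}^{M \times K}$ and $C$ a $K \times K$ skew-symmetric matrix), a form used in the NDPP literature; the function name `ndpp_log_likelihood` and all shapes here are illustrative assumptions, not the paper's implementation. The key observation is that Sylvester's determinant identity lets the normalizer $\det(L + I)$ be computed in time linear in $M$.

```python
import numpy as np

def ndpp_log_likelihood(V, B, C, subsets):
    """Mean log-likelihood of observed subsets under a low-rank NDPP.

    Assumed kernel: L = V V^T + B C B^T, with V, B of shape (M, K) and C a
    K x K skew-symmetric matrix, so L factors as Z W^T with Z = [V, B C]
    and W = [V, B] (both M x 2K).
    """
    M, K = V.shape
    Z = np.hstack([V, B @ C])                  # (M, 2K)
    W = np.hstack([V, B])                      # (M, 2K)
    # Normalizer: det(I_M + Z W^T) = det(I_{2K} + W^T Z) by Sylvester's
    # determinant identity -- O(M K^2 + K^3) instead of O(M^3).
    log_norm = np.linalg.slogdet(np.eye(2 * K) + W.T @ Z)[1]

    total = 0.0
    for Y in subsets:                          # Y: array of item indices
        L_Y = Z[Y] @ W[Y].T                    # |Y| x |Y| principal submatrix of L
        total += np.linalg.slogdet(L_Y)[1]     # det(L_Y) >= 0 for such kernels
    return total / len(subsets) - log_norm

# Toy usage with random parameters (illustrative only).
rng = np.random.default_rng(0)
M, K = 1000, 8
V, B = rng.normal(size=(M, K)), rng.normal(size=(M, K))
A = rng.normal(size=(K, K))
C = A - A.T                                    # skew-symmetric component
print(ndpp_log_likelihood(V, B, C, [np.array([1, 5, 9]), np.array([2, 3])]))
```

Note that only $K \times K$ and $|Y| \times |Y|$ determinants appear, which is how both memory and per-example runtime stay linear in $M$.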
One-sentence Summary: We propose scalable learning and maximum a posteriori (MAP) inference algorithms for nonsymmetric determinantal point processes (DPPs).
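For the inference side, the sketch below shows the standard greedy heuristic for DPP MAP, which repeatedly adds the item giving the largest gain in $\log\det(L_Y)$. This naive version is quadratic-or-worse in $M$ and is only meant to illustrate the objective; the paper's contribution is a linear-complexity algorithm for this problem. The name `greedy_map` and the budget parameter `k` are hypothetical.

```python
import numpy as np

def greedy_map(L, k):
    """Naive greedy MAP for a (possibly nonsymmetric) DPP kernel L (M x M).

    At each step, add the item that most increases log det(L_Y). This is the
    usual greedy heuristic; it does not exploit low-rank structure and so
    does not achieve the linear-in-M cost of the paper's algorithm.
    """
    M = L.shape[0]
    Y = []
    for _ in range(k):
        best_j, best_val = None, -np.inf
        for j in range(M):
            if j in Y:
                continue
            idx = Y + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            val = logdet if sign > 0 else -np.inf
            if val > best_val:
                best_j, best_val = j, val
        if best_j is None:          # no item yields a positive-determinant extension
            break
        Y.append(best_j)
    return Y

# Toy usage: build a small NDPP kernel L = V V^T + B C B^T and pick 3 items.
rng = np.random.default_rng(0)
V, B = rng.normal(size=(50, 5)), rng.normal(size=(50, 5))
A = rng.normal(size=(5, 5))
L = V @ V.T + B @ (A - A.T) @ B.T
print(greedy_map(L, 3))
```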
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [cgartrel/nonsymmetric-DPP-learning](https://github.com/cgartrel/nonsymmetric-DPP-learning) + [1 community implementation (Papers with Code)](https://paperswithcode.com/paper/?openreview=HajQFbx_yB)
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:2006.09862/code)