Revisiting stochastic submodular maximization with cardinality constraint: A bandit perspective

Published: 20 May 2024, Last Modified: 07 Jun 2024. Accepted by TMLR.
Abstract: In this paper, we focus on the problem of maximizing non-negative, monotone, stochastic submodular functions under a cardinality constraint. Recent works have explored continuous optimization algorithms via multi-linear extensions for such problems and provided appropriate approximation guarantees. We take a fresh look at this problem from a discrete, (stochastic) greedy perspective under a probably approximately correct (PAC) setting, i.e., the goal is to obtain solutions whose expected objective value is greater than or equal to $(1-1/e-\epsilon){\rm OPT}-\nu$ with at least $1-\delta$ probability, where ${\rm OPT}$ is the optimal objective value. Using the theory of multi-armed bandits, we propose novel bandit stochastic greedy (BSG) algorithms in which selection of the next element at iteration $i$ is posed as a $(\nu_i,\delta_i)$-PAC best-arm identification problem. Given $(\nu,\delta)$-PAC parameters to BSG, we formally characterize a set $\mathcal{A}(\nu,\delta)$ of per-iteration policies such that any policy from this set guarantees a $(\nu,\delta)$-PAC solution for the stochastic submodular maximization problem using BSG. We next discuss the problem of learning a policy in $\mathcal{A}(\nu,\delta)$ while minimizing the computational cost. With our learned policy, we show that BSG has lower computational cost than existing stochastic submodular maximization approaches. An interesting outcome of our analysis is the development of both linear and almost-linear time algorithms for the exemplar-based clustering problem with a $(1-1/e-\epsilon)$-approximation guarantee under a PAC setting. Lastly, we also study the problem of learning a policy for BSG under a budget setting. Experiments on various problems illustrate the efficacy of our approach in terms of optimization quality as well as computational efficiency.
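To make the abstract's construction concrete, here is a toy sketch of the general idea: a stochastic greedy loop over subsampled candidates in which each iteration's argmax is replaced by a best-arm identification step over noisy marginal-gain evaluations. This is not the paper's actual BSG algorithm or its learned per-iteration policies; uniform sampling with a fixed per-arm budget stands in for a proper $(\nu_i,\delta_i)$-PAC best-arm routine, and all names and parameters below are hypothetical.

```python
import math
import random

def bandit_stochastic_greedy(ground, k, noisy_gain, eps=0.2,
                             samples_per_arm=50, rng=None):
    """Illustrative sketch only: stochastic greedy where the per-iteration
    argmax is approximated by averaging noisy marginal-gain samples per
    candidate (a crude stand-in for a PAC best-arm identification step)."""
    rng = rng or random.Random(0)
    S = []
    # Stochastic-greedy subsample size, (n/k) * log(1/eps), as in
    # Mirzasoleiman et al.'s lazier-than-lazy greedy.
    s = max(1, int(len(ground) / k * math.log(1 / eps)))
    for _ in range(k):
        candidates = [x for x in ground if x not in S]
        arms = rng.sample(candidates, min(s, len(candidates)))
        best, best_mean = None, -float("inf")
        for a in arms:
            # Estimate the expected marginal gain of arm `a` given S by
            # averaging a fixed number of noisy evaluations.
            mean = sum(noisy_gain(S, a) for _ in range(samples_per_arm)) / samples_per_arm
            if mean > best_mean:
                best, best_mean = a, mean
        S.append(best)
    return S
```

As a usage example, one can run this on a small stochastic coverage instance, where `noisy_gain` returns the marginal coverage of an element corrupted by Gaussian noise; with a modest per-arm sample budget the noisy means concentrate and the loop recovers the near-greedy picks.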
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url:
Changes Since Last Submission: As suggested by the Action Editor, we have incorporated the following changes: 1. The text regarding Q1, Q2, and Q3 on page 2 has been edited in accordance with the results presented in Sections 3.2, 3.3, and 3.4, respectively. 2. We have modified the wording of Lemma 3.3, Theorem 3.5, and Theorem 3.9 to make them clearer.
Assigned Action Editor: ~Trevor_Campbell1
Submission Number: 1834