Keywords: submodular, multi-armed bandit, bandit feedback, best-arm identification, combinatorial optimization
Abstract: We address the problem of submodular maximization where the objective function $f:2^U\to\mathbb{R}_{\geq 0}$ can only be accessed through i.i.d. noisy queries. This problem arises in many applications, including influence maximization, diverse recommendation systems, and large-scale facility location optimization. We propose an efficient adaptive sampling strategy, called Confident Sample (CS), inspired by algorithms for best-arm identification in multi-armed bandits, which significantly improves sample efficiency. We integrate CS into existing approximation algorithms for submodular maximization, resulting in highly sample-efficient algorithms whose approximation guarantees are arbitrarily close to those of the standard value-oracle setting. We propose and analyze sample-efficient algorithms for monotone submodular maximization under cardinality and matroid constraints, as well as for unconstrained non-monotone submodular maximization. Our theoretical analysis is complemented by an empirical evaluation on real instances, demonstrating the superior sample efficiency of our proposed algorithm relative to alternative approaches.
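The core idea behind confidence-based adaptive sampling can be illustrated with a minimal sketch. This is not the paper's CS algorithm; it is a generic Hoeffding-style stopping rule for estimating a single noisy function value (e.g., a marginal gain $f(S\cup\{e\})-f(S)$) to within a target accuracy. The function name `confident_estimate`, the sub-Gaussian noise parameter `sigma`, and the sample cap are all illustrative assumptions, not quantities from the submission.

```python
import math

def confident_estimate(noisy_oracle, eps, delta, sigma=1.0, max_samples=100_000):
    """Adaptively query an i.i.d. noisy oracle until the empirical mean is
    within eps of the true value with probability >= 1 - delta, assuming
    sigma-sub-Gaussian noise (illustrative sketch, not the paper's CS)."""
    total, n = 0.0, 0
    while n < max_samples:
        total += noisy_oracle()  # one i.i.d. noisy query to f
        n += 1
        # sub-Gaussian confidence radius after n samples
        radius = sigma * math.sqrt(2.0 * math.log(2.0 / delta) / n)
        if radius <= eps:
            break  # estimate is eps-accurate with probability >= 1 - delta
    return total / n

# Example: estimate a (here noiseless, for determinism) marginal gain of 3.0.
estimate = confident_estimate(lambda: 3.0, eps=0.5, delta=0.1)
```

An adaptive rule like this spends few samples on elements whose values are easy to resolve and more on closely contested ones, which is the sample-efficiency mechanism the abstract attributes to best-arm-identification techniques.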
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8027