Adaptive Submodular Policy Optimization

Published: 09 May 2025, Last Modified: 06 Sept 2025. RLC 2025. License: CC BY 4.0.
Keywords: policy gradients, submodularity, adaptive submodularity
TL;DR: We propose KL-regularized policy optimization for adaptive submodular maximization, which is a framework for decision making under uncertainty with submodular rewards.
Abstract: We propose KL-regularized policy optimization for adaptive submodular maximization, a framework for decision making under uncertainty with submodular rewards. Policy optimization of adaptive submodular functions justifies a surprisingly simple and efficient policy gradient update, in which the optimized action affects only its immediate reward and not future ones. It also allows us to learn adaptive submodular policies with large action spaces, such as those represented by large language models (LLMs). We prove that our policies monotonically improve as the regularization diminishes and converge to the optimal greedy policy. Our experiments show major gains in statistical efficiency on both synthetic problems and tasks involving LLMs.
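The update described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's implementation: it uses a toy coverage function as the submodular reward, a uniform reference policy for the KL term, and a REINFORCE-style estimate in which the sampled action is weighted only by its immediate marginal gain minus the KL penalty; all names, constants, and the problem instance are assumptions.

```python
import math
import random

random.seed(0)

# Toy submodular reward: set coverage. Item i covers cover[i]; the
# marginal gain of picking i given already-covered elements S is
# |cover[i] \ S|. (Hypothetical instance for illustration only.)
cover = [{0, 1, 2}, {2, 3}, {4}, {2}]
covered = {2}  # elements covered by previously chosen items

def marginal_gain(i, S):
    return len(cover[i] - S)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

n = len(cover)
theta = [0.0] * n                 # policy logits (learned parameters)
log_ref = math.log(1.0 / n)       # uniform reference policy, log pi_ref(a)
tau, lr = 0.1, 0.2                # KL weight and step size (assumed values)

# KL-regularized policy gradient for one greedy step. The REINFORCE
# weight uses ONLY the immediate reward r minus the KL penalty term;
# no return-to-go or future rewards enter the update.
for _ in range(5000):
    pi = softmax(theta)
    a = random.choices(range(n), weights=pi)[0]
    r = marginal_gain(a, covered)
    # Unbiased estimate of grad [ E[r] - tau * KL(pi || pi_ref) ]
    w = r - tau * (math.log(pi[a]) - log_ref)
    for i in range(n):
        g = (1.0 if i == a else 0.0) - pi[i]  # d log pi(a) / d theta_i
        theta[i] += lr * w * g

pi = softmax(theta)
print(pi)
```

With a small KL weight, the learned policy concentrates on the item with the largest immediate marginal gain (item 0 here, gain 2), consistent with the abstract's claim that the policy approaches the greedy choice as the regularization diminishes.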
Submission Number: 341