Abstract: We study online convex optimization with multiple adversarial constraints, where at each round a learner selects an action, and an adversary simultaneously reveals a convex cost function and $K$ convex constraint functions. The learner aims to minimize regret while keeping the cumulative constraint violation (CCV) of each individual constraint small. We introduce the Multi-Constraint Constrained Online Convex Optimization (MC-COCO) framework and develop a unified algorithmic approach based on exponential Lyapunov potentials. The key insight is that encoding all $K$ constraint violations via the potential $S_t = \sum_{k=1}^{K} e^{\lambda Q_k(t)}$ yields a surrogate cost whose growth ratio is controlled by the maximum single-round violation rather than the number of constraints $K$. This decoupling enables a per-constraint CCV of $\widetilde{O}(T^{1-\beta} \ln K)$, where $\beta \in [0,1]$ is a tunable regret-CCV trade-off parameter, improving qualitatively over the linear $K$-dependence of naive approaches. We instantiate the framework across three canonical settings (constrained experts, general Lipschitz-convex, and smooth convex) and further develop extensions for heterogeneous constraint prioritization (where critical constraints can be controlled at the $\widetilde{O}(T^{1-\beta}/\alpha_k)$ level) and long-term budget feasibility. Experiments on adversarial instances with up to $K=100$ constraints validate the theoretical bounds and confirm the logarithmic scaling in $K$.
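The exponential potential $S_t = \sum_{k=1}^{K} e^{\lambda Q_k(t)}$ can be sketched in a few lines. The snippet below is an illustrative sketch, not the paper's algorithm: the queue update rule `update_queues` (a standard $Q_k(t+1) = \max(0, Q_k(t) + g_k(x_t))$ form) and the helper names are assumptions, and the surrogate weights are computed via log-sum-exp for numerical stability.

```python
import math

def potential_weights(Q, lam):
    """Per-constraint weights proportional to exp(lam * Q_k), i.e. the
    terms of the potential S_t = sum_k exp(lam * Q_k(t)).
    Hypothetical helper; uses log-sum-exp shifting for stability."""
    m = max(lam * q for q in Q)
    exps = [math.exp(lam * q - m) for q in Q]
    s = sum(exps)
    return [e / s for e in exps]

def update_queues(Q, violations):
    """Standard virtual-queue update Q_k(t+1) = max(0, Q_k(t) + g_k(x_t));
    the paper's exact update may differ (assumption)."""
    return [max(0.0, q + g) for q, g in zip(Q, violations)]

# Toy run with K = 3 constraints over two rounds of violations g_k(x_t).
Q = [0.0, 0.0, 0.0]
for g in ([0.2, -0.1, 0.05], [0.3, 0.0, -0.2]):
    Q = update_queues(Q, g)
w = potential_weights(Q, lam=1.0)
```

The weights `w` indicate how strongly each constraint's violation should steer the surrogate cost at the next round: constraints with larger accumulated queues receive exponentially larger weight, which is what lets the bound depend on $\ln K$ rather than $K$.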
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Silvio_Lattanzi1
Submission Number: 8111