Omega: Optimistic EMA Gradients

Published: 03 Jul 2023 · Last Modified: 03 Jul 2023 · LXAI @ ICML 2023 Regular Deadline · Oral
Keywords: Min-Max Optimization, Optimistic Gradient Method, Stochastic Optimization
TL;DR: We introduce Omega, a variant of the optimistic gradient method for stochastic min-max optimization.
Abstract: Stochastic min-max optimization has gained interest in the machine learning community with advances in GANs and adversarial training. Although game optimization is fairly well understood in the deterministic setting, several issues persist in the stochastic regime. Recent work has shown that stochastic gradient descent-ascent methods such as the optimistic gradient method are highly sensitive to noise or can fail to converge. Although alternative strategies exist, they can be prohibitively expensive. We introduce Omega, a method with optimistic-like updates that mitigates the impact of noise by incorporating an exponential moving average (EMA) of historic gradients in its update rule. We also explore a variation of this algorithm that incorporates momentum. Although we do not provide convergence guarantees, our experiments on stochastic games show that Omega outperforms the optimistic gradient method when applied to linear players.
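To make the idea concrete, below is a minimal sketch of an Omega-style descent-ascent loop, assuming the optimistic correction term is built from an EMA of past gradients rather than only the previous gradient. The function names, coefficients, and hyperparameters (`lr`, `beta`, `steps`) are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def omega_gda(grad_x, grad_y, x0, y0, lr=0.1, beta=0.9, steps=1000):
    """Sketch of an Omega-style loop for min_x max_y f(x, y).

    Assumption: the optimistic correction uses an EMA of historic
    gradients in place of the single previous gradient.
    """
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    ema_x, ema_y = np.zeros_like(x), np.zeros_like(y)
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)        # (stochastic) gradients
        ema_x = beta * ema_x + (1 - beta) * gx     # EMA of historic gradients
        ema_y = beta * ema_y + (1 - beta) * gy
        x = x - lr * (2 * gx - ema_x)              # optimistic-like descent step
        y = y + lr * (2 * gy - ema_y)              # optimistic-like ascent step
    return x, y

# Toy bilinear game f(x, y) = x * y with equilibrium at (0, 0)
x_final, y_final = omega_gda(lambda x, y: y, lambda x, y: x, x0=1.0, y0=1.0)
```

The bilinear game above is only a toy usage example; the paper's experiments on stochastic games with linear players are where the comparison against the optimistic gradient method is reported.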
Submission Type: Non-Archival
Submission Number: 3