Keywords: Variational Inequality, Hidden Monotone, Surrogate, Min-max, Projected Bellman Error
TL;DR: A novel surrogate-loss approach to solving variational inequalities under function approximation, with theoretical guarantees.
Abstract: Deep learning has proven to be effective in a wide variety of loss minimization problems.
However, many applications of interest, like minimizing projected Bellman error and min-max optimization, cannot be modelled as minimizing a scalar loss function but instead correspond to solving a variational inequality (VI) problem.
This difference in setting has caused many practical challenges, as naive gradient-based approaches from supervised learning tend to diverge or cycle in the VI case.
In this work, we propose a surrogate-based approach to solving VIs that is principled in the VI setting and compatible with deep learning.
We show that our approach has three main benefits: (1) it guarantees linear convergence under a sufficient-descent condition on the surrogate when hidden monotone structure is present (e.g., problems that are convex-concave with respect to the model's predictions), (2) it provides a unifying perspective on existing methods, and (3) it is amenable to existing deep learning optimizers such as Adam.
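The following is a minimal, hypothetical sketch (not the authors' exact algorithm) of the kind of surrogate-loss loop the abstract describes: a toy VI with hidden monotone structure, where the network's predictions parameterize a bilinear (monotone) game. Each outer step forms a target by an extragradient step on the game in prediction space, and Adam then minimizes a squared surrogate loss in parameter space. All names, constants, and the toy problem are illustrative assumptions.

```python
# Hypothetical sketch of a surrogate-loss loop for a VI with hidden monotone structure.
# Toy setup: predictions z = model(x) parameterize the bilinear game min_u max_v u*v,
# whose operator F(u, v) = (v, -u) is monotone in the predictions.
import torch

torch.manual_seed(0)
x = torch.randn(32, 4)  # fixed toy inputs
model = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
eta = 0.5  # step size for the VI update in prediction space (illustrative)

def F(z):
    # Monotone operator of the per-sample bilinear game min_u max_v u*v.
    u, v = z[:, :1], z[:, 1:]
    return torch.cat([v, -u], dim=1)

for outer in range(200):
    with torch.no_grad():
        z = model(x)
        z_half = z - eta * F(z)          # extragradient half-step in prediction space
        target = z - eta * F(z_half)     # convergent VI update defines the target
    for inner in range(20):              # minimize the surrogate with Adam
        opt.zero_grad()
        surrogate = 0.5 * ((model(x) - target) ** 2).mean()
        surrogate.backward()
        opt.step()

# Predictions should drift toward the game's equilibrium at zero.
print(model(x).norm().item())
```

The key design point illustrated here is that the update direction is computed by a method that is sound for monotone VIs in prediction space, while the deep network only ever minimizes a scalar surrogate loss, so standard optimizers like Adam apply directly.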
Submission Number: 72