Mirror Descent Methods with Weighting Scheme for Outputs for Constrained Variational Inequality Problems
Keywords: Mirror-descent, Convex function, Variational Inequality, Monotone operator, Weighting scheme, Inequality type constraints
Abstract: Variational inequalities play a key role in machine learning research, with applications in generative adversarial networks, supervised and unsupervised learning, reinforcement learning, adversarial training, and generative models. This paper is devoted to variational inequality problems. We consider two classes of problems: the first is the classical constrained variational inequality, and the second is the same problem with functional (inequality-type) constraints. To solve these problems, we propose mirror descent-type methods with a weighting scheme for the points generated at each iteration of the algorithms. This scheme assigns smaller weights to the initial points and larger weights to the most recent ones, thereby improving the convergence rate of the proposed methods. For the variational inequality problem with functional constraints, the proposed method switches between adaptive and non-adaptive steps depending on the values of the functional constraints at the current iterate. We analyze the proposed methods with time-varying step sizes and prove the optimal convergence rate for variational inequality problems with bounded monotone operators. Numerical experiments on classical constrained variational inequality problems show a significant improvement of the proposed methods over the modified projection method.
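The weighting idea described in the abstract can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the paper's exact algorithm: it uses the Euclidean mirror map (so the mirror step reduces to a projected step), the bilinear saddle-point operator F(x, y) = (y, -x) on the box [-1, 1]^2 (whose solution is the origin), a 1/sqrt(k) step size, and weights proportional to the iteration index so that recent points count more in the returned average.

```python
import numpy as np

def F(z):
    # Monotone operator of the bilinear problem min_x max_y x*y.
    x, y = z
    return np.array([y, -x])

def project(z):
    # Euclidean projection onto the feasible box [-1, 1]^2.
    return np.clip(z, -1.0, 1.0)

def weighted_mirror_descent(z0, n_iters=5000):
    # Mirror descent with the Euclidean mirror map and a weighting
    # scheme for the generated points (weights grow with k, so the
    # most recent iterates dominate the output average).
    z = np.array(z0, dtype=float)
    weighted_sum = np.zeros_like(z)
    total_weight = 0.0
    for k in range(1, n_iters + 1):
        step = 1.0 / np.sqrt(k)          # time-varying step size (assumption)
        z = project(z - step * F(z))     # projected (mirror) step
        w = float(k)                     # larger weight for later points
        weighted_sum += w * z
        total_weight += w
    return weighted_sum / total_weight   # weighted average as the output

z_bar = weighted_mirror_descent([0.9, -0.7])
```

For this monotone operator the raw iterates rotate around the solution, so the last iterate stays far from the origin, while the weighted average of the trajectory approaches it; this is why the output is an average rather than the final point.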
Submission Number: 19