You Shall Pass: Dealing with the Zero-Gradient Problem in Predict and Optimize for Convex Optimization

Published: 27 Jun 2024, Last Modified: 20 Aug 2024
Venue: Differentiable Almost Everything (ICML 2024 Workshop)
License: CC BY 4.0
Keywords: Predict and Optimize, Differentiable Optimization, Machine Learning, ICML
TL;DR: This paper identifies the zero-gradient problem in convex differentiable optimization and proposes a solution.
Abstract: In predict and optimize, machine learning models are trained to predict the parameters of optimization problems, using task performance as the training objective. A key challenge is computing the Jacobian of the optimal solution with respect to these parameters. While linear problems typically rely on approximations because their Jacobian is zero or undefined, non-linear convex problems often use the exact Jacobian. This paper demonstrates that the zero-gradient issue also arises in the non-linear case and introduces a smoothing technique which, combined with a quadratic approximation and projection-distance regularization, resolves it. Experiments on a portfolio optimization problem confirm the method's effectiveness.
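For intuition, here is a minimal PyTorch sketch (not from the paper) of how the zero-gradient problem can arise even for a non-linear convex problem: a one-dimensional quadratic program whose solution is a projection onto a box. Once the constraint is active, the solution is locally constant in the predicted parameter, so the exact Jacobian is zero and no learning signal reaches the predictor. The sigmoid soft clamp at the end is only a generic stand-in for a smoothing technique, not the paper's exact construction.

```python
import torch

# Toy non-linear (quadratic) convex problem:
#   x*(w) = argmin_x (x - w)^2  s.t.  0 <= x <= 1,
# i.e. the projection of the predicted parameter w onto [0, 1].
w = torch.tensor(1.5, requires_grad=True)  # predicted parameter
x_star = torch.clamp(w, 0.0, 1.0)          # closed-form argmin

# Downstream task loss evaluated at the solution.
task_loss = (x_star - 0.3) ** 2
task_loss.backward()
print(w.grad)  # tensor(0.): the constraint is active, so dx*/dw = 0

# Generic smoothed surrogate (illustration only): a sigmoid soft clamp
# keeps the Jacobian nonzero everywhere, so gradient always flows back.
tau = 0.1                                   # assumed smoothing temperature
w_s = torch.tensor(1.5, requires_grad=True)
x_soft = torch.sigmoid((w_s - 0.5) / tau)   # smooth approximation of clamp
((x_soft - 0.3) ** 2).backward()
print(w_s.grad)  # small but nonzero: signal reaches the predictor
```

In this sketch the smoothing temperature `tau` trades off fidelity to the original solution against gradient magnitude; the paper additionally combines its smoothing with a quadratic approximation and projection-distance regularization.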
Submission Number: 7