Keywords: Primal-Dual Diffusion; Quadratic Programming; Consistency Distillation
TL;DR: A diffusion model that simultaneously outputs both the primal and dual variables of a quadratic programming (QP) problem, combined with a post-hoc refinement algorithm.
Abstract: Quadratic Programming (QP) is an important class of mathematical optimization problems widely used in fields such as economics, engineering, finance, and machine learning. Recently, with the development of Learning to Optimize, many studies have attempted to solve QP problems using Graph Neural Networks (GNNs), but they suffer from relatively poor performance compared to traditional algorithms. In this paper, we introduce the Primal-Dual Diffusion (PDD) model for solving QP problems. The model uses a diffusion approach to learn both the primal and dual variables simultaneously in order to predict an accurate solution. Starting from this prediction, only a small number of KKT-based corrections and parallelizable post-processing iterations (e.g., PDHG, ADMM) are needed to ensure that the solution satisfies the constraints and converges to the optimal solution. Notably, PDD is the first neural QP solver capable of recovering the optimal solution. Additionally, to address the slow sampling of diffusion models, we adopt a consistency distillation method to develop a one-step diffusion solver for QP. Experimental results demonstrate that our approach achieves state-of-the-art performance among learning-based QP solvers while remaining competitive with traditional methods.
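The abstract's post-processing step can be illustrated with a minimal sketch: warm-start PDHG (Chambolle-Pock) iterations from a predicted primal-dual pair for an inequality-constrained QP, and monitor the KKT residual. This is an illustrative reconstruction, not the paper's actual implementation; the function names, step sizes, and the toy problem below are all assumptions.

```python
import numpy as np

def pdhg_refine(Q, c, A, b, x0, y0, tau=0.1, sigma=0.1, iters=2000):
    """Refine a predicted primal-dual pair (x0, y0) for
        min 0.5 x^T Q x + c^T x   s.t.   A x <= b
    via PDHG-style iterations (illustrative sketch, not the paper's code)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x_new = x - tau * (Q @ x + c + A.T @ y)            # primal gradient step
        x_bar = 2.0 * x_new - x                            # extrapolation
        y = np.maximum(0.0, y + sigma * (A @ x_bar - b))   # projected dual ascent
        x = x_new
    return x, y

def kkt_residual(Q, c, A, b, x, y):
    """Max of stationarity, primal feasibility, and complementarity violations."""
    stat = np.linalg.norm(Q @ x + c + A.T @ y, np.inf)
    feas = np.linalg.norm(np.maximum(A @ x - b, 0.0), np.inf)
    comp = np.abs(y * (A @ x - b)).max()
    return max(stat, feas, comp)
```

For example, on the toy QP min 0.5||x||^2 - 1^T x s.t. x1 + x2 <= 1 (optimum x* = (0.5, 0.5), y* = 0.5), a slightly perturbed "prediction" such as x0 = (0.6, 0.4), y0 = 0.3 is driven to a near-zero KKT residual in a few thousand cheap, parallelizable iterations, mirroring the abstract's claim that only light post-processing is needed once the diffusion model's output is close.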
Primary Area: generative models
Submission Number: 16507