TL;DR: This paper explores the theoretical expressive power of GNNs for quadratic programs.
Abstract: Quadratic programming (QP) is the most widely applied category of problems in nonlinear programming. Many applications require real-time/fast solutions, though not necessarily with high precision. Existing methods either involve matrix decomposition or use the preconditioned conjugate gradient method. For relatively large instances, these methods cannot meet the real-time requirement unless an effective preconditioner is available. Recently, graph neural networks (GNNs) have opened new possibilities for QP. Promising empirical studies of applying GNNs to QP tasks show that GNNs can capture key characteristics of an optimization instance and, accordingly, either adaptively guide crucial solver configurations during the solving process or directly provide an approximate solution. However, the theoretical understanding of GNNs in this context remains limited. Specifically, it is unclear what GNNs can and cannot achieve for QP tasks in theory. This work addresses this gap in the context of linearly constrained QP tasks. In the continuous setting, we prove that message-passing GNNs can universally represent fundamental properties of quadratic programs, including feasibility, optimal objective values, and optimal solutions. In the more challenging mixed-integer setting, while GNNs are not universal approximators, we identify a subclass of QP problems that GNNs can reliably represent.
Lay Summary: Quadratic programming (QP) is widely used for decision-making in real-world applications. Traditional QP solvers can be slow, especially for large problems. This paper studies how graph neural networks (GNNs)—machine learning models built for graph-structured data—can be used to speed up QP solutions.
The authors prove that for convex QPs with continuous variables and linear constraints, GNNs can reliably predict feasibility, the optimal objective value, and an optimal solution. For more complex cases involving integer variables (mixed-integer QPs), GNNs face limitations. Still, the paper identifies specific cases where GNNs work well and offers practical criteria to check this.
These results explain why GNNs often perform well in practice and provide theoretical foundations for using them in optimization.
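To make the setup concrete, below is a minimal, illustrative sketch (not the paper's code; see the linked repository for that) of the standard way a linearly constrained QP is fed to a message-passing GNN: the instance min ½·xᵀQx + cᵀx s.t. Ax ≤ b is encoded as a graph whose variable and constraint nodes carry the problem data, and one round of message passing mixes information along A and the off-diagonal of Q. All function and variable names here are illustrative assumptions, and the aggregation is a plain sum followed by tanh rather than any particular architecture from the paper.

```python
import numpy as np

def qp_to_graph(Q, c, A, b):
    """Encode min 0.5 x^T Q x + c^T x  s.t.  A x <= b  as graph data.

    Variable node j carries [c[j], Q[j, j]]; constraint node i carries
    [b[i]]; A[i, j] weights constraint-variable edges and the
    off-diagonal of Q weights variable-variable edges.
    (Illustrative encoding, not the paper's exact construction.)"""
    var_feats = np.stack([c, np.diag(Q)], axis=1)   # shape (n, 2)
    con_feats = b[:, None]                          # shape (m, 1)
    Q_off = Q - np.diag(np.diag(Q))                 # variable-variable edges
    return var_feats, con_feats, A, Q_off

def message_passing_layer(var_feats, con_feats, A, Q_off):
    """One round of sum-aggregation message passing (toy update rule)."""
    con_msg = A @ var_feats        # constraints gather from variables
    var_msg = A.T @ con_feats      # variables gather from constraints
    qq_msg = Q_off @ var_feats     # variables gather from Q-neighbors
    new_con = np.tanh(np.concatenate([con_feats, con_msg], axis=1))
    new_var = np.tanh(np.concatenate([var_feats, var_msg, qq_msg], axis=1))
    return new_var, new_con

# Tiny instance: min 0.5 x^T Q x + c^T x  s.t.  x1 + x2 <= 1
Q = np.array([[2.0, 0.5], [0.5, 2.0]])
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

v, k, A_, Q_off = qp_to_graph(Q, c, A, b)
v1, k1 = message_passing_layer(v, k, A_, Q_off)
print(v1.shape, k1.shape)  # per-node embeddings after one layer: (2, 5) (1, 3)
```

A readout over the final node embeddings would then produce the quantities the paper studies: a graph-level prediction for feasibility or the optimal objective value, or a per-variable-node prediction for an optimal solution.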
Link To Code: https://github.com/liujl11git/GNN-QP
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Graph Neural Networks, Quadratic Programming, Learning to Optimize
Submission Number: 8964