Keywords: Learn-to-optimize, Polar Learning, Hard Constraints
TL;DR: We propose Homeomorphic Polar Learning (HoP), which solves hard-constrained optimization problems without extra correction steps
Abstract: Constrained optimization demands highly efficient solvers, which has motivated the development of learn-to-optimize (L2O) approaches. As a data-driven method, L2O leverages neural networks to efficiently produce approximate solutions. However, a significant challenge remains in ensuring both the optimality and feasibility of the neural network's output. To tackle this issue, we introduce Homeomorphic Polar Learning (HoP), which solves hard-constrained optimization by embedding a homeomorphic mapping in the neural network. This bijective structure enables end-to-end training without extra penalties or correction steps. We evaluate HoP across a variety of synthetic optimization tasks and real-world applications in wireless communications. Across all tasks, HoP achieves zero constraint violations while remaining competitive in optimality and significantly faster than classical solvers: on a polygon-constrained sinusoidal QP it matches SLSQP's optimality with a $16\times$ speedup; on high-dimensional semi-unbounded problems it is tens of times faster than the baseline optimizer while achieving comparable or better objective values; and on QoS-MISO WSR it maintains $0\%$ constraint violations with an $11\times$ speedup over SCS+FP. These results demonstrate that HoP is a practical, general, and strictly feasible alternative to penalty-based and projection-based L2O methods.
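The core idea in the abstract, reparameterizing the network output through a polar homeomorphism so that every forward pass lands inside the feasible set, can be illustrated with a minimal sketch. This is not the authors' implementation: the disk-shaped constraint, the `PolarBallLayer` name, and the dimensions are assumptions chosen for a simple star-shaped feasible set.

```python
import torch
import torch.nn as nn

class PolarBallLayer(nn.Module):
    """Sketch: map an unconstrained 2-D feature onto a disk of radius R.

    The angle and radius are produced by smooth, invertible-on-their-range
    activations, so feasibility holds by construction and gradients flow
    end-to-end without any penalty or projection step.
    """

    def __init__(self, radius: float = 1.0):
        super().__init__()
        self.radius = radius  # assumed feasible-set size, illustrative only

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, 2) unconstrained features from the backbone network.
        theta = torch.pi * torch.tanh(z[:, 0])      # angle in (-pi, pi)
        rho = self.radius * torch.sigmoid(z[:, 1])  # radius in (0, R)
        x = rho * torch.cos(theta)
        y = rho * torch.sin(theta)
        return torch.stack([x, y], dim=-1)          # always inside the disk

# Hypothetical usage: a small backbone followed by the polar output layer.
backbone = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
model = nn.Sequential(backbone, PolarBallLayer(radius=1.0))

params = torch.randn(8, 4)   # batch of problem parameters
out = model(params)          # feasible, differentiable solutions
assert (out.norm(dim=-1) < 1.0).all()  # zero violations by construction
```

For more general star-shaped sets, the fixed radius would be replaced by a direction-dependent boundary distance $r(\theta)$, which is the role the paper's homeomorphic mapping plays.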
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 6474