Make Optimization Once and for All with Fine-grained Guidance

ICLR 2026 Conference Submission 20959 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Optimization; Diffusion; Learning to Optimize
TL;DR: We propose fine-grained modeling of the solution space. Takeaways: 1) The optimization process's meta features do provide information for solution-space modeling. 2) Data from real optimization processes is helpful, but still not enough.
Abstract: Learning to Optimize (L2O) enhances optimization efficiency with integrated neural networks. L2O paradigms achieve strong outcomes, e.g., refitting optimizers, or generating unseen solutions iteratively or directly. However, conventional L2O methods require intricate design and rely on real optimization processes and numerical optimization results, limiting scalability and generalization. Our analyses explore a general framework for learning optimization, called Diff-L2O, which focuses on augmenting sampled solutions from a wider view rather than relying only on local updates from real optimization processes. Meanwhile, we give the related generalization bound, showing that the sample diversity of Diff-L2O brings better performance. This bound can be readily applied to other fields, covering diversity, mean-variance, and different tasks. Diff-L2O's strong compatibility is empirically verified with only minute-level training, compared with the hour-level training of other methods.
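The contrast drawn in the abstract, local updates from real optimization trajectories versus a wider sampling of the solution space, can be illustrated with a toy experiment. The sketch below is hypothetical and not the paper's method or code: it uses a toy quadratic objective, and names such as `sample_augmented_solutions` are illustrative assumptions.

```python
import numpy as np

# Toy objective: f(x) = 0.5 * ||A x - b||^2, with a per-instance (A, b).
def make_instance(dim=8, seed=0):
    rng = np.random.default_rng(seed)
    return rng.normal(size=(dim, dim)), rng.normal(size=dim)

def objective(x, A, b):
    r = A @ x - b
    return 0.5 * float(r @ r)

# Conventional L2O-style data: points along a real gradient-descent trajectory.
def trajectory_samples(A, b, steps=50, lr=1e-2):
    x = np.zeros(A.shape[1])
    traj = [x.copy()]
    for _ in range(steps):
        x = x - lr * (A.T @ (A @ x - b))  # gradient of the toy objective
        traj.append(x.copy())
    return np.stack(traj)

# Wider-view alternative: augment the solution space by sampling noisy
# perturbations around anchor solutions (diversity instead of local updates only).
def sample_augmented_solutions(anchors, n_per_anchor=20, noise=0.5, seed=0):
    rng = np.random.default_rng(seed)
    samples = [a + noise * rng.normal(size=(n_per_anchor, a.shape[0]))
               for a in anchors]
    return np.concatenate(samples, axis=0)

A, b = make_instance()
traj = trajectory_samples(A, b)
wide = sample_augmented_solutions(traj[::10])  # every 10th trajectory point as an anchor
print("trajectory best:", min(objective(x, A, b) for x in traj))
print("augmented  best:", min(objective(x, A, b) for x in wide))
```

In this toy setting, the augmented set covers a broader region of the solution space than the single trajectory, which is the intuition behind training on diverse sampled solutions rather than on real optimization traces alone.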
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 20959