LLaMoCo: Instruction Tuning of Large Language Models for Optimization Code Generation

ICLR 2025 Conference Submission 6367 Authors

26 Sept 2024 (modified: 02 Dec 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Large Language Models, Instruction Tuning, Optimization Code Generation
TL;DR: This paper introduces a novel paradigm that instruction-tunes general LLMs into competitive optimizers for optimization code generation.
Abstract: Recent research on optimization using large language models (LLMs) typically involves either iterative next-step solution seeking or directly prompting LLMs to generate critical optimization code. However, these methods often suffer from low computational efficiency, high sensitivity to prompt design, and a lack of domain-specific knowledge. We introduce LLaMoCo, the first instruction-tuning framework designed to adapt LLMs for solving optimization problems in a code-to-code manner. LLaMoCo features a comprehensive instruction set that pairs code-style problem descriptions as input prompts with robust optimization code from expert optimizers as target outputs. We then develop a novel two-phase learning strategy with a contrastive-learning-based warm-up to enhance convergence during instruction tuning. Extensive experiments demonstrate that a CodeGen (350M) model tuned by LLaMoCo yields a powerful domain-specific model for generating expert-level optimizers, outperforming GPT-4 Turbo and other competitors on both synthetic and realistic problem sets. The trained model and usage instructions are available online.
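For readers unfamiliar with the code-to-code setting described in the abstract, the following is a minimal, hypothetical sketch of what one instruction pair might look like: a code-style problem description on the prompt side and expert-optimizer code on the target side. The objective function, bounds, and choice of differential evolution here are illustrative assumptions for the sketch, not the paper's actual instruction set or optimizer pool.

```python
# Hypothetical illustration of a code-to-code instruction pair (assumed format,
# not LLaMoCo's actual dataset schema).

import numpy as np
from scipy.optimize import differential_evolution

# --- Prompt side: a code-style problem description ---------------------------
# The problem is presented to the model as code (objective plus search bounds)
# rather than as a natural-language description.
def objective(x: np.ndarray) -> float:
    """Shifted sphere function in 10 dimensions (illustrative example)."""
    return float(np.sum((x - 0.5) ** 2))

bounds = [(-5.0, 5.0)] * 10

# --- Target side: expert-level optimizer code ---------------------------------
# The tuned model would emit code configuring a competitive optimizer for this
# problem class; differential evolution serves as a stand-in expert optimizer.
result = differential_evolution(objective, bounds, maxiter=200, tol=1e-8, seed=0)

print("best f:", result.fun)
print("best x:", result.x)
```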
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6367