ExLLM: Experience-Enhanced LLM Optimization for Molecular Design and Beyond

ICLR 2026 Conference Submission 25386 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Large Language Models, Molecular Design, Evolutionary Algorithms, Discrete Optimization
TL;DR: ExLLM is an LLM-as-optimizer framework with experience, offspring, and feedback mechanisms that achieves SOTA on molecular design and generalizes to diverse discrete optimization tasks from minimal problem templates.
Abstract: Molecular design involves an enormous and irregular search space, where traditional optimizers such as Bayesian optimization, genetic algorithms, and generative models struggle to leverage expert knowledge or handle complex feedback. Recently, LLMs have been used as optimizers, achieving promising results on benchmarks such as PMO. However, existing approaches rely on prompting alone or on additional training, without mechanisms to handle complex feedback or maintain scalable memory. In particular, the common practice of appending or summarizing experiences at every query leads to redundancy, degraded exploration, and ultimately poor final outcomes under large-scale iterative search. We introduce ExLLM, an LLM-as-optimizer framework with three components: (1) a compact, evolving experience snippet tailored to large discrete spaces that distills non-redundant cues and improves convergence at low cost; (2) a simple yet effective k-offspring scheme that widens exploration per call and reduces orchestration cost; and (3) a lightweight feedback adapter that normalizes objectives for selection while formatting constraints and expert hints for iteration. ExLLM achieves new state-of-the-art results on PMO and generalizes strongly: in our setup, it sets records on circle packing and stellarator design and yields consistent gains across additional domains, requiring only a task-description template and evaluation functions to transfer.
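
To make the three abstract components concrete, the following is a minimal Python sketch, under illustrative assumptions, of how such an experience-enhanced LLM-as-optimizer loop could be wired together. The names llm_propose, update_experience, and evaluate, and all selection and normalization details, are hypothetical placeholders rather than the authors' implementation.

# Minimal sketch (not the authors' code) of an experience-enhanced
# LLM-as-optimizer loop: an evolving experience snippet, k offspring per
# LLM call, and a feedback adapter that normalizes objective values.
import heapq
from typing import Callable, List, Optional, Tuple


def normalize(scores: List[float]) -> List[float]:
    """Feedback adapter: min-max normalize raw objective values."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.5] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]


def exllm_style_optimize(
    task_template: str,                       # problem-description template
    evaluate: Callable[[str], float],         # objective for one candidate (e.g. a SMILES string)
    llm_propose: Callable[[str], List[str]],  # hypothetical LLM call returning k offspring
    update_experience: Callable[[str, List[Tuple[str, float]]], str],  # hypothetical LLM call distilling cues
    k: int = 8,
    population_size: int = 20,
    iterations: int = 50,
    seed_pool: Optional[List[str]] = None,
) -> Tuple[str, float]:
    experience = ""  # compact, evolving experience snippet
    population: List[Tuple[float, str]] = [(evaluate(c), c) for c in (seed_pool or [])]
    heapq.heapify(population)  # min-heap keyed on raw score

    for _ in range(iterations):
        parents = [c for _, c in heapq.nlargest(min(5, len(population)), population)]
        prompt = (
            f"{task_template}\n"
            f"Experience so far:\n{experience}\n"
            f"Current best candidates:\n" + "\n".join(parents) + "\n"
            f"Propose {k} improved candidates."
        )
        offspring = llm_propose(prompt)                 # k offspring from a single call
        scored = [(evaluate(c), c) for c in offspring]
        if not scored:
            continue

        # Feedback adapter: normalized scores make feedback comparable across rounds.
        norm = normalize([s for s, _ in scored])
        feedback = [(cand, n) for (_, cand), n in zip(scored, norm)]

        # Keep the top `population_size` candidates by raw objective value.
        for raw, cand in scored:
            if len(population) < population_size:
                heapq.heappush(population, (raw, cand))
            elif raw > population[0][0]:
                heapq.heapreplace(population, (raw, cand))

        # Distill non-redundant cues from this round into the experience snippet.
        experience = update_experience(experience, feedback)

    best_score, best_cand = max(population)
    return best_cand, best_score

In this sketch the experience string is rewritten each round rather than appended to, mirroring the abstract's point that appending or summarizing experiences at every query causes redundancy; how ExLLM actually distills the snippet is specified in the paper, not here.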
Primary Area: optimization
Submission Number: 25386