Keywords: Agentic Reinforcement Learning, Agentic Memory, Large Language Models, Lead Optimization
Abstract: In drug discovery, lead optimization iteratively refines a lead compound to improve molecular properties while preserving structural similarity to the original molecule. However, each oracle evaluation is expensive, making sample efficiency a key challenge for existing methods under a limited oracle budget. Trial-and-error approaches require many oracle calls, while methods that leverage external knowledge tend to reuse familiar templates and struggle on challenging objectives. A key missing piece is long-term memory that can ground decisions and provide reusable insights for future optimizations. To address this, we present MARLO (Memory-augmented Agentic Reinforcement Learning for Lead Optimization), a multi-turn agentic reinforcement learning (RL) framework with a dual-memory system. Specifically, MARLO uses Static Exemplar Memory to retrieve relevant exemplars for cold-start grounding, and Evolving Skill Memory to distill successful trajectories into reusable strategies. Built on this memory-augmented formulation, we train the policy with dense step-wise rewards, turning costly rollouts into long-term knowledge that improves future optimization. Extensive experiments show that MARLO achieves 90% success on single-property tasks (1.5× the best baseline) and 52% on multi-property tasks using only 500 oracle calls. Our code is available at https://anonymous.4open.science/r/MARLO/.
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: AI/LLM Agents, NLP Applications, Clinical and Biomedical Applications
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 9730