$R^3$: End-to-End Reasoning-based Planning for Multi-step Retrosynthesis via Reinforcement Learning

ACL ARR 2026 January Submission 9209 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Multi-step Retrosynthesis, Reinforcement Learning, Large Language Models, AI for Science
Abstract: Multi-step retrosynthetic planning is a fundamental challenge in organic chemistry, traditionally modeled as a combinatorial search problem guided by single-step prediction models. However, this search-centric paradigm often disconnects from the explicit chemical reasoning processes employed by human experts. In this paper, we propose $R^3$ (**R**einforced **R**easoning **R**etrosynthesis), a novel framework that reformulates this task as end-to-end generative reasoning. Instead of traversing a search tree, $R^3$ simulates the problem-solving logic of chemists to directly generate complete synthetic pathways. To achieve this, we initialize the model with domain knowledge and employ end-to-end Reinforcement Learning (RL) to optimize the entire planning policy. Experimental results on Retrobench show that $R^3$ achieves a state-of-the-art Top-1 accuracy of 43.7\%, demonstrating that generative reasoning offers a superior alternative to traditional search algorithms in solving complex retrosynthetic problems.
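The abstract's contrast between tree search and end-to-end route generation can be illustrated with a toy sketch. This is not the authors' code: the policy, the SMILES stock set, and the binary outcome reward below are all hypothetical stand-ins, meant only to show the kind of route-level reward signal an end-to-end RL setup could optimize (as opposed to scoring one retro step at a time during search).

```python
# Hypothetical illustration (not from the paper): a "policy" emits a
# complete multi-step retrosynthetic route in one shot, and a terminal
# reward checks whether every final precursor is a purchasable building
# block. End-to-end RL would optimize the policy against such a reward.

PURCHASABLE = {"CC(=O)O", "CCO"}  # toy building-block stock (SMILES)

def route_reward(route):
    """Binary outcome reward: 1.0 iff all leaf precursors are in stock."""
    leaves = route[-1]  # precursors produced by the final retro step
    return 1.0 if all(p in PURCHASABLE for p in leaves) else 0.0

def toy_policy(target):
    """Stand-in for a generative LLM policy: returns a full route as a
    list of precursor sets, one per retrosynthetic step."""
    if target == "CCOC(C)=O":           # ethyl acetate ...
        return [["CC(=O)O", "CCO"]]     # ... -> acetic acid + ethanol
    return [[target]]                   # otherwise: no decomposition

route = toy_policy("CCOC(C)=O")
print(route_reward(route))  # 1.0: both precursors are purchasable
```

The design point is that the reward is attached to the whole generated pathway rather than to individual search expansions, which is what makes the planning policy trainable end to end.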
Paper Type: Long
Research Area: Language Models
Research Area Keywords: applications; chain-of-thought; fine-tuning; continual learning; pre-training
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches low compute settings-efficiency, Publicly available software and/or pre-trained models
Languages Studied: SMILES, IUPAC, English.
Submission Number: 9209