Exploring Deep Recurrent Models with Reinforcement Learning for Molecule Design
Nov 03, 2017 (modified: Nov 03, 2017) · ICLR 2018 Conference Blind Submission
Abstract: Despite many advances in computational molecular design, significant challenges remain before these methods become commonplace. This work builds on recent advances in deep neural networks and reinforcement-learning strategies for sequence generation, investigating how to apply these techniques to the chemistry domain, with a focus on challenges in drug discovery. It proposes 19 benchmarks selected by subject experts so that researchers without a chemistry background can use them, expands previously used datasets to approximately 1.1 million training molecules, and explores how to apply new reinforcement-learning techniques effectively to molecular design. The benchmarks, built as OpenAI Gym environments, will be open-sourced to encourage innovation in algorithms applicable to drug discovery and beyond this focused chemistry community. Finally, this work examines recent reinforcement-learning methods with excellent sample complexity (the A2C and PPO algorithms) and investigates their behavior in molecular generation, demonstrating significant performance gains over standard reinforcement-learning techniques.
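To illustrate the kind of interface the abstract describes, here is a minimal, hypothetical sketch of a Gym-style molecule-building environment: the agent appends one SMILES-like token per step and receives a reward only at episode end. The class name, vocabulary, and reward (fraction of carbon atoms, purely illustrative) are assumptions for exposition, not the paper's released benchmarks.

```python
import random


class ToySmilesEnv:
    """Hypothetical toy environment following the Gym reset/step convention.

    The agent builds a token sequence one action at a time; the episode ends
    when it emits the end token or hits a length limit. The reward here is
    an illustrative stand-in, not a real chemistry objective.
    """

    VOCAB = ["C", "O", "N", "<end>"]  # assumed toy vocabulary
    MAX_LEN = 8

    def reset(self):
        self.tokens = []
        return tuple(self.tokens)  # observation: tokens generated so far

    def step(self, action):
        token = self.VOCAB[action]
        done = token == "<end>" or len(self.tokens) + 1 >= self.MAX_LEN
        if token != "<end>":
            self.tokens.append(token)
        # Illustrative terminal reward: fraction of carbon tokens.
        reward = self.tokens.count("C") / max(len(self.tokens), 1) if done else 0.0
        return tuple(self.tokens), reward, done, {}


# A random-policy rollout, the way any RL agent would query the benchmark.
env = ToySmilesEnv()
obs, done = env.reset(), False
rng = random.Random(0)
while not done:
    action = rng.randrange(len(env.VOCAB))
    obs, reward, done, info = env.step(action)
```

Exposing benchmarks behind the standard `reset`/`step` interface is what lets off-the-shelf implementations of A2C or PPO be applied without chemistry-specific glue code.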
TL;DR: We investigate a variety of RL algorithms for molecular generation and define new benchmarks (to be released as OpenAI Gym environments), finding that PPO and a hill-climbing MLE algorithm work best.
Keywords: reinforcement learning, molecule design, de novo design, PPO, sample-efficient reinforcement learning