Keywords: Reinforcement Learning, Molecular Design, Computational Fluid Dynamics
TL;DR: A method to accelerate Reinforcement Learning in costly reward scenarios.
Abstract: Transfer of recent advances in deep reinforcement learning to real-world
applications is hindered by high data demands and thus low efficiency and
scalability. Through independent improvements of components such as replay
buffers or more stable learning algorithms, and through massively distributed
systems, training time has been reduced from several days to several hours for
standard benchmark tasks. However, while rewards in simulated environments are
well-defined and easy to compute, reward evaluation becomes the bottleneck in
many real-world environments, e.g., in molecular optimization tasks, where
computationally demanding simulations or even experiments are required to
evaluate states and to quantify rewards. Therefore, training might become
prohibitively expensive without an extensive amount of computational resources
and time. We propose to alleviate this problem by replacing costly ground-truth
rewards with rewards modeled by neural networks, counteracting non-stationarity
of state and reward distributions during training with an active learning component.
We demonstrate that using our proposed ACRL method (actively learning costly
rewards for reinforcement learning), it is possible to train agents in complex
real-world environments orders of magnitude faster. By enabling the application
of reinforcement learning methods to new domains, we show that we can find
interesting and non-trivial solutions to real-world optimization problems in
chemistry, materials science, and engineering.
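
The abstract describes replacing costly ground-truth rewards with a learned reward model and querying the expensive oracle only where the model is uncertain. The following is a minimal sketch of that idea, not the authors' implementation: the ensemble surrogate, the uncertainty threshold, and the `oracle` callable are all hypothetical stand-ins for the paper's actual components.

```python
import torch
import torch.nn as nn

class SurrogateReward(nn.Module):
    """Small ensemble of MLPs predicting a reward and an uncertainty estimate."""
    def __init__(self, state_dim, n_members=5, hidden=128):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_members)
        )

    def forward(self, states):
        preds = torch.stack([m(states).squeeze(-1) for m in self.members])
        # Ensemble mean as the predicted reward, std as epistemic uncertainty
        return preds.mean(0), preds.std(0)


def actively_learned_reward(states, surrogate, oracle, buffer, threshold=0.1):
    """Use the surrogate where it is confident; query the costly oracle otherwise.

    `oracle` stands in for the expensive simulation or experiment; `buffer`
    collects newly labeled states for periodic surrogate retraining.
    """
    with torch.no_grad():
        mean, std = surrogate(states)
    rewards = mean.clone()
    uncertain = std > threshold
    if uncertain.any():
        # Evaluate only the uncertain states with the expensive ground-truth reward
        true_r = torch.tensor([oracle(s) for s in states[uncertain]],
                              dtype=rewards.dtype)
        rewards[uncertain] = true_r
        buffer.append((states[uncertain], true_r))
    return rewards
```

In this sketch, the returned rewards would be fed to any standard RL algorithm in place of the ground-truth signal, and the buffer of oracle-labeled states would be used to retrain the surrogate during training, which is one plausible way to counteract the non-stationarity of the state and reward distributions mentioned in the abstract.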
Paper Track: Papers
Submission Category: AI-Guided Design
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/actively-learning-costly-reward-functions-for/code)