Abstract: Training a deep neural network to maximize a target objective has become the standard recipe for successful machine learning over the last decade. These networks can be optimized with supervised learning if the target objective is differentiable.
For many interesting problems, however, this is not the case. Common objectives like intersection over union (IoU), the bilingual evaluation understudy (BLEU) score, or rewards cannot be optimized with supervised learning.
A common workaround is to define differentiable surrogate losses, leading to suboptimal solutions with respect to the actual objective.
In recent years, reinforcement learning (RL) has emerged as a promising alternative for optimizing deep neural networks to maximize non-differentiable objectives.
Examples include aligning large language models via human feedback, code generation, object detection, and control problems.
This makes RL techniques relevant to the broader machine learning audience. The subject is, however, time-intensive to approach due to the large range of methods and their often very theoretical presentation.
In this introduction, we take an approach different from that of classic reinforcement learning textbooks. Rather than focusing on tabular problems, we introduce reinforcement learning as a generalization of supervised learning, which we first apply to non-differentiable objectives and later to temporal problems.
Assuming only basic knowledge of supervised learning, the reader will be able to understand state-of-the-art deep RL algorithms like proximal policy optimization (PPO) after reading this tutorial.
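To make the core idea concrete, the following is a minimal sketch (not taken from the submission) of the score-function (REINFORCE) gradient estimator, the basic mechanism behind policy-gradient methods such as PPO: a non-differentiable reward is maximized by ascending E[R(a) grad log pi_theta(a)]. The reward table, learning rate, and baseline used here are illustrative assumptions only.

# Minimal sketch, assuming a toy bandit setup: maximize a non-differentiable
# reward with the score-function (REINFORCE) estimator, using only NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-differentiable objective: a reward lookup table over 4 actions.
rewards = np.array([0.1, 0.3, 1.0, 0.2])

theta = np.zeros(4)   # logits of a categorical policy pi_theta
lr = 0.5              # illustrative learning rate
baseline = 0.0        # running-mean baseline to reduce gradient variance

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for step in range(500):
    probs = softmax(theta)
    a = rng.choice(4, p=probs)   # sample an action from the policy
    r = rewards[a]               # reward signal, no gradient flows through it
    # gradient of log pi_theta(a) w.r.t. the logits: one_hot(a) - probs
    grad_logp = -probs
    grad_logp[a] += 1.0
    theta += lr * (r - baseline) * grad_logp   # REINFORCE ascent step
    baseline = 0.9 * baseline + 0.1 * r        # update the baseline

print("learned policy:", np.round(softmax(theta), 3))  # mass should concentrate on action 2

Running the sketch shows the policy concentrating on the highest-reward action even though the reward itself is never differentiated, which is the property the tutorial builds on when moving from supervised learning to RL.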
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Updated draft for the rebuttal. Changes are highlighted in red.
Details on the changes and their reasoning can be found in the discussion below.
Assigned Action Editor: ~Edward_Grefenstette1
Submission Number: 1944