Abstract: Standard reinforcement learning (RL) assumes that an agent can observe a reward for each state-action pair. However, in practical applications, it is often difficult and costly to collect a reward for each state-action pair. While there have been several works considering RL with trajectory feedback, it is unclear whether trajectory feedback is inefficient for learning when trajectories are long. In this work, we consider a model named RL with segment feedback, which offers a general paradigm filling the gap between per-state-action feedback and trajectory feedback. In this model, we consider an episodic Markov decision process (MDP), where each episode is divided into $m$ segments, and the agent observes reward feedback only at the end of each segment. Under this model, we study two popular feedback settings: binary feedback and sum feedback, where the agent observes a binary outcome and a reward sum generated according to the underlying reward function, respectively. To investigate the impact of the number of segments $m$ on learning performance, we design efficient algorithms and establish regret upper and lower bounds for both feedback settings. Our theoretical and experimental results show that: under binary feedback, increasing the number of segments $m$ decreases the regret at an exponential rate; in contrast, surprisingly, under sum feedback, increasing $m$ does not reduce the regret significantly.
Lay Summary: Standard reinforcement learning (RL) assumes that we can observe a reward for each state-action pair. However, in real-world applications such as autonomous driving, it is often difficult and costly to collect a reward for each state-action pair. While there have been prior works that consider collecting a reward for a whole trajectory, it is unclear whether such trajectory feedback is inefficient when trajectories are long.
In this work, we consider a model called RL with segment feedback, where each trajectory is divided into multiple segments, and we collect a reward for each segment. This model offers a general paradigm filling the gap between per-state-action feedback and trajectory feedback. Under this model, an interesting and important question is: how do segments impact learning performance?
Our theoretical and experimental results show that: under binary feedback, where we observe a binary outcome generated according to the reward function, segments help accelerate learning; surprisingly, under sum feedback, where we observe the sum of random rewards over a segment, segments do not expedite learning much.
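To make the segment feedback model concrete, below is a minimal Python sketch of a single episode in which the horizon is split into $m$ segments and the agent receives only per-segment feedback. The helper names `reward_fn`, `policy`, and `transition_fn` are hypothetical placeholders, and the Bernoulli-with-logistic-link used for binary feedback is an illustrative assumption rather than the paper's exact observation model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_episode_with_segment_feedback(H, m, reward_fn, policy, transition_fn,
                                      init_state, feedback="sum", rng=None):
    """Roll out one episode of horizon H, split into m segments, and return
    only per-segment feedback instead of H per-step rewards.

    feedback="sum":    observe the (hidden) reward sum of each segment.
    feedback="binary": observe a Bernoulli outcome whose mean is a logistic
                       function of the segment reward sum (illustrative link).
    """
    rng = rng or np.random.default_rng()
    seg_len = H // m                 # assume H is divisible by m for simplicity
    state = init_state
    observations = []
    seg_reward = 0.0
    for t in range(H):
        action = policy(state, t)
        seg_reward += reward_fn(state, action)   # per-step reward, hidden from the agent
        state = transition_fn(state, action, rng)
        if (t + 1) % seg_len == 0:               # end of a segment
            if feedback == "sum":
                observations.append(seg_reward)
            else:                                # binary feedback
                observations.append(rng.binomial(1, sigmoid(seg_reward)))
            seg_reward = 0.0
    return observations                          # m feedback values per episode
```

Setting `m = 1` recovers trajectory feedback, while `m = H` recovers per-state-action feedback, which is the sense in which segment feedback interpolates between the two regimes.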
Primary Area: Reinforcement Learning
Keywords: Reinforcement learning (RL), segment feedback, binary feedback, sum feedback
Flagged For Ethics Review: true
Submission Number: 7302