Keywords: Reinforcement Learning, Deep Learning
TL;DR: Theory predicts experience stitching but it’s rare in practice; we show Monte Carlo methods can enable stitching and may outperform TD in large-model settings.
Abstract: Reinforcement learning (RL) promises to solve long-horizon tasks even when training data contains only short fragments of the behaviors.
This *experience stitching* capability is often viewed as the purview of temporal difference (TD) methods. However, outside of small tabular settings, trajectories never intersect, calling into question this conventional wisdom.
Moreover, the common belief is that Monte Carlo (MC) methods should not be able to recombine experience, yet it remains unclear whether function approximation could result in a form of implicit stitching.
The goal of this paper is to study empirically whether the conventional wisdom about stitching actually holds in settings where function approximation is used.
We demonstrate that MC methods can also achieve experience stitching.
While TD methods do achieve slightly stronger stitching capabilities than MC methods (in line with conventional wisdom), this gap is significantly smaller than the gap between small and large neural networks (even on quite simple tasks).
We find that increasing critic capacity effectively reduces the generalization gap for both MC and TD methods.
These results suggest that the traditional TD inductive bias for stitching may be less necessary in the era of large models for RL and, in some cases, may offer diminishing returns.
Additionally, our results suggest that stitching, a form of generalization unique to the RL setting, might be achieved not through specialized algorithms (temporal difference learning) but rather through the same recipe that has provided generalization in other machine learning settings (via scale).
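For context on the algorithmic distinction the abstract turns on, the minimal sketch below contrasts the regression targets a Monte Carlo critic and a TD(0) critic would fit: MC regresses onto the full discounted return, while TD bootstraps from the critic's own next-state estimates. The trajectory, discount factor, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mc_targets(rewards, gamma=0.99):
    """Monte Carlo targets: the full discounted return from each step to the end."""
    returns = np.zeros(len(rewards))
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

def td_targets(rewards, next_state_values, gamma=0.99):
    """TD(0) targets: one-step reward plus the bootstrapped value of the next state."""
    return np.asarray(rewards) + gamma * np.asarray(next_state_values)

if __name__ == "__main__":
    rewards = [0.0, 0.0, 1.0]            # toy 3-step trajectory (hypothetical)
    next_state_values = [0.5, 0.9, 0.0]  # critic's current estimates for the next states
    print("MC targets:", mc_targets(rewards))
    print("TD targets:", td_targets(rewards, next_state_values))
```

Because TD targets reuse the critic's estimates at states reached by other trajectories, they are commonly credited with enabling stitching; the paper's question is whether MC critics with enough capacity generalize well enough to achieve the same effect implicitly.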
Primary Area: reinforcement learning
Submission Number: 21355