Leveraging Unlabeled Data Sharing through Kernel Function Approximation in Offline Reinforcement Learning
Abstract: Offline reinforcement learning (RL) learns policies from a fixed dataset but often requires large amounts of data. The challenge arises when reward-labeled datasets are expensive to obtain, especially when rewards must be provided by human labelers. Unlabeled data, in contrast, tends to be far cheaper, which makes finding effective ways to use it in offline RL important whenever labeled data is limited or costly. In this paper, we present an algorithm that exploits unlabeled data in offline RL with kernel function approximation, and we provide theoretical guarantees for it. We analyze several eigenvalue decay conditions on the reproducing kernel Hilbert space (RKHS) $\mathcal{H}_k$, which govern the complexity of the algorithm. In summary, our work provides a promising approach for exploiting the advantages of unlabeled data in offline RL while maintaining theoretical assurances.
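For context, eigenvalue decay conditions of this kind are standard in kernel-based RL analyses; the regimes below are a typical formulation with generic constants (our illustration, not necessarily the paper's exact definitions). Writing $\mu_1 \ge \mu_2 \ge \cdots \ge 0$ for the eigenvalues of the integral operator induced by the kernel $k$:
- Finite rank $d$: $\mu_j = 0$ for all $j > d$;
- $\gamma$-polynomial decay: $\mu_j \le C\, j^{-\gamma}$ for some $\gamma > 1$ and constant $C$;
- $\gamma$-exponential decay: $\mu_j \le C_1 \exp(-C_2\, j^{\gamma})$ for some $\gamma > 0$ and constants $C_1, C_2$.
Faster decay corresponds to a smaller effective dimension of $\mathcal{H}_k$ and hence tighter complexity bounds.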
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have carefully revised our manuscript in response to the reviewer’s comments. The key modifications are as follows:
1. Clarification on Finite-Horizon MDPs vs. Discounted MDPs
- Previously, we emphasized our focus on finite-horizon MDPs without explicitly distinguishing their challenges from the discounted infinite-horizon setting.
- In this revision, we clarify that the finite-horizon setting introduces horizon-dependent reward and transition functions, making it distinct from the discounted case (the corresponding Bellman recursion is illustrated after this list).
- We also highlight that, unlike prior works (e.g., Hu et al., 2023), which require strong uniform data coverage assumptions (e.g., finite concentrability coefficients), our framework does not rely on such assumptions, broadening its applicability.
2. Refinement of Feature Coverage Assumptions
- Previously, we compared our coverage assumption to the bounded concentrability coefficient used in prior work without explicitly characterizing the nature of our assumption.
- In the revised version, we clarify that our approach relies on a global coverage assumption based on the spectrum of feature covariance matrices (Assumption 4.7).
- We explicitly state that this assumption ensures sufficient data coverage across all policies in the considered class, making it stronger than single-policy coverage but necessary for handling non-stationary settings (a minimal numerical illustration of such a spectrum condition is sketched after this list).
These refinements improve the clarity and precision of our contributions, addressing the reviewer’s concerns while strengthening our theoretical framework.
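As a concrete illustration of the horizon dependence discussed in item 1 (standard episodic-MDP notation, not a quotation from the paper): with horizon $H$, each step $h$ has its own reward $r_h$ and transition kernel $P_h$, and optimal values are obtained by backward induction, $Q_h^*(s,a) = r_h(s,a) + \mathbb{E}_{s' \sim P_h(\cdot \mid s,a)}[V_{h+1}^*(s')]$ and $V_h^*(s) = \max_a Q_h^*(s,a)$ with $V_{H+1}^* \equiv 0$, so $H$ distinct functions must be estimated, whereas the discounted stationary setting has a single fixed-point equation.

To make the spectrum-based coverage condition of item 2 concrete, below is a minimal, hypothetical Python sketch (our illustration, not the paper's released code; the function names, the random-Fourier-feature stand-in for the RKHS feature map, and all constants are assumptions) of how one might inspect the eigenvalues of an empirical feature covariance matrix for offline data at a fixed step $h$:

```python
import numpy as np

def rff_features(sa, n_features=128, bandwidth=1.0, seed=0):
    """Random Fourier features approximating an RBF kernel feature map.

    sa: (n, d) array of concatenated state-action vectors.
    Returns an (n, n_features) feature matrix, a finite-dimensional
    surrogate for the (possibly infinite-dimensional) RKHS features.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / bandwidth, size=(sa.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(sa @ W + b)

def covariance_spectrum(sa, ridge=1e-3):
    """Descending eigenvalues of the ridge-regularized empirical feature
    covariance. A global coverage condition in the spirit of Assumption 4.7
    asks that the behavior data put non-negligible eigenvalue mass on the
    directions reachable by any policy in the considered class."""
    phi = rff_features(sa)
    cov = phi.T @ phi / phi.shape[0] + ridge * np.eye(phi.shape[1])
    return np.linalg.eigvalsh(cov)[::-1]

# Illustrative usage on synthetic offline data at one step h:
sa = np.random.default_rng(1).normal(size=(500, 6))  # 500 (s, a) pairs, dim 6
spec = covariance_spectrum(sa)
print("largest eigenvalue:", spec[0], "smallest:", spec[-1])
```

A poorly conditioned spectrum (smallest eigenvalues sitting at the ridge floor) signals directions the dataset does not cover, which is exactly what a global coverage assumption rules out.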
Code: https://github.com/d09942015ntu/leveraging_unlabeled_offline_rl
Assigned Action Editor: ~Nishant_A_Mehta1
Submission Number: 2977