Abstract: Recent advances in Reinforcement Learning from Human Feedback (RLHF) have shown that KL-regularization plays a pivotal role in improving the efficiency of RL fine-tuning for large language models (LLMs). Despite its empirical advantage, the theoretical difference between KL-regularized RL and standard RL remains largely under-explored. While there is a recent line of work on the theoretical analysis of the KL-regularized objective in decision making (Xiong et al., 2024a; Xie et al., 2024; Zhao et al., 2024), these analyses either reduce to the traditional RL setting or rely on strong coverage assumptions. In this paper, we propose an optimism-based KL-regularized online contextual bandit algorithm and provide a novel analysis of its regret. By carefully leveraging the benign optimization landscape induced by the KL-regularization and the optimistic reward estimation, our algorithm achieves an $\mathcal{O}\big(\eta\log (N_{\mathcal R} T)\cdot d_{\mathcal R}\big)$ logarithmic regret bound, where $\eta, N_{\mathcal R},T,d_{\mathcal R}$ denote the KL-regularization parameter, the cardinality of the reward function class, the number of rounds, and the complexity of the reward function class, respectively. Furthermore, we extend our algorithm and analysis to reinforcement learning by developing a novel decomposition over transition steps, and we obtain a similar logarithmic regret bound.
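To make the setting concrete, here is a minimal sketch of one round of an optimism-based, KL-regularized contextual bandit update, assuming the standard objective $\max_\pi \mathbb{E}_\pi[r] - \eta^{-1}\,\mathrm{KL}(\pi \,\|\, \pi_{\mathrm{ref}})$ and a finite action set. The function and variable names (`kl_regularized_policy`, `reward_ucb`, `pi_ref`, `bonus`) are illustrative and not the paper's notation or pseudocode; the optimism bonus stands in for whatever confidence width the analysis prescribes.

```python
import numpy as np

def kl_regularized_policy(reward_ucb, pi_ref, eta):
    """Gibbs/softmax policy maximizing  E_pi[r] - (1/eta) * KL(pi || pi_ref)
    for a fixed context, given optimistic reward estimates over a finite action set.

    reward_ucb : (A,) array of optimistic (upper-confidence) reward estimates
    pi_ref     : (A,) array of reference-policy probabilities (e.g., the base LLM)
    eta        : KL-regularization parameter (larger eta = weaker regularization)
    """
    logits = np.log(pi_ref) + eta * reward_ucb
    logits -= logits.max()          # numerical stability before exponentiating
    probs = np.exp(logits)
    return probs / probs.sum()

# Toy usage: 4 actions, uniform reference policy.
rng = np.random.default_rng(0)
pi_ref = np.full(4, 0.25)
reward_hat = rng.uniform(size=4)    # fitted reward estimates for the current context
bonus = 0.2 * np.ones(4)            # hypothetical optimism bonus (confidence width)
pi = kl_regularized_policy(reward_hat + bonus, pi_ref, eta=2.0)
action = rng.choice(4, p=pi)        # play the sampled action, observe reward, refit the reward model
```

The closed-form update $\pi(a\mid x) \propto \pi_{\mathrm{ref}}(a\mid x)\exp\big(\eta\, \hat r_{\mathrm{UCB}}(x,a)\big)$ is what makes the KL-regularized problem tractable per round; the paper's contribution concerns how to choose and analyze the optimistic estimates so that the cumulative regret stays logarithmic in $T$.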
Lay Summary: How can AI agents learn quickly while minimizing dangerous mistakes?
The solution lies in a technique called KL regularization, inspired by human learning. Just as humans balance trying new strategies with familiar "safe" approaches, the algorithm gently discourages the AI from straying too far from proven strategies. By combining this with "optimism"—prioritizing promising new actions—the AI explores more efficiently.
The breakthrough: The algorithm achieves logarithmic regret, meaning the gap between its performance and that of the best possible strategy grows extremely slowly (only logarithmically) as the number of learning rounds increases.
Why does this matter?
- Safer AI: Prevents drastic failures during learning—critical for robotics or medical AI.
- Efficient adaptation: Enables rapid fine-tuning of large language models (LLMs) using human feedback (RLHF) without performance collapse.
- Theoretical foundation: Resolves a long-standing gap between empirical success and theoretical understanding of KL regularization.
Impact: Enables more reliable AI systems that learn faster with lower costs—key for real-world deployment where mistakes have consequences.
Primary Area: Theory->Reinforcement Learning and Planning
Keywords: Reinforcement learning, regularization
Submission Number: 14654