Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

Published: 30 Dec 2023, Last Modified: 30 Dec 2023. Accepted by TMLR.
Abstract: Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-layered approach to the development of safer AI systems.
Certifications: Survey Certification
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version
Assigned Action Editor: ~Marcello_Restelli1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1557