Reinforcement Learning with Fine-grained Reward for Controllable Text Generation

20 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: controllable text generation, reinforcement learning
Abstract: To alleviate text degeneration in large-scale language models and meet the requirements of real-world applications, it is essential to make generation more controllable. Previous reinforcement learning (RL) research on language modeling generally learns from sentence-level feedback, which requires extensive exploration to collect enough trajectories and additional steps to identify the contributory components within a noisy trajectory corpus. To address this, we propose a novel reinforcement learning algorithm with FIne-grained REward (FIRE). We derive an extensible fine-grained reward function and ease the trade-off between reward approximation and training stability. We present a theoretical connection between our approach and canonical policy-gradient RL methods. Experimental results show that FIRE achieves superior controllability of language models with lower computational overhead than prior RL approaches.
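As a rough illustration of the sentence-level vs. token-level distinction the abstract draws, below is a minimal PyTorch policy-gradient sketch with per-token rewards. Everything here is a hypothetical placeholder: the `policy_gradient_loss` helper, the random tensors standing in for a policy rollout, and the reward values are assumptions for illustration, not FIRE's actual reward function or training procedure.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: REINFORCE-style update where each generated token
# receives its own (fine-grained) reward, rather than one sentence-level
# scalar broadcast over the whole trajectory.

def policy_gradient_loss(logits, actions, token_rewards, gamma=1.0):
    """Policy-gradient loss with per-token rewards.

    logits:        (T, V) pre-softmax scores from the policy at each step
    actions:       (T,)   token ids actually sampled
    token_rewards: (T,)   fine-grained reward assigned to each token
    """
    log_probs = F.log_softmax(logits, dim=-1)                      # (T, V)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)  # (T,)

    # Discounted return-to-go from each position: credit assignment is
    # localized, so a token is only credited for what follows it.
    T = token_rewards.shape[0]
    returns = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        running = token_rewards[t] + gamma * running
        returns[t] = running

    return -(chosen * returns).mean()

# Toy usage with random tensors standing in for a real policy rollout.
T, V = 8, 100
logits = torch.randn(T, V, requires_grad=True)
actions = torch.randint(0, V, (T,))
token_rewards = torch.randn(T)

loss = policy_gradient_loss(logits, actions, token_rewards)
loss.backward()
print(loss.item())
```

With sentence-level feedback, `token_rewards` would collapse to a single scalar applied to every position, which is noisier and demands more exploration; localizing the reward per token is the intuition behind the fine-grained reward described above.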
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2468