Hummer: Towards Limited Competitive Preference Dataset

Published: 10 Jul 2024 · Last Modified: 26 Aug 2024 · COLM · CC BY 4.0
Research Area: Alignment, Data, Safety
Keywords: reinforcement learning from human feedback, RLHF, preference dataset, reward models, alignment
TL;DR: We introduce \texttt{Hummer} and its fine-grained variant \texttt{Hummer-F}: preference datasets designed to mitigate conflicts between alignment objectives, improving downstream task adaptation and robustness against jailbreak attacks.
Abstract: Preference datasets are essential for incorporating human preferences into pre-trained language models, playing a key role in the success of Reinforcement Learning from Human Feedback. However, these datasets often exhibit conflicting alignment objectives, leading to increased vulnerability to jailbreak attacks and difficulty in adapting downstream tasks to prioritize specific alignment objectives without negatively impacting others. In this work, we introduce a novel statistical metric, Alignment Dimension Conflict, to quantify the degree of conflict within preference datasets. We then present \texttt{Hummer} and its fine-grained variant, \texttt{Hummer-F}, as innovative pairwise preference datasets with reduced-conflict alignment objectives. \texttt{Hummer} is built on UltraFeedback and is enhanced by AI feedback from GPT-4, making it the first preference dataset aimed at reducing competition between alignment objectives. Furthermore, we develop reward models, \texttt{HummerRM} and \texttt{HummerRM-F}, which employ a hybrid sampling approach to balance diverse alignment objectives effectively. This sampling method positions \texttt{HummerRM} as an ideal model for domain-specific fine-tuning while reducing vulnerability to jailbreak attacks.
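The abstract mentions a hybrid sampling approach that balances alignment objectives when training the reward model, but this page gives no implementation details. As a minimal sketch only, one way to realize such balancing is weighted sampling over per-objective subsets of the preference data; the dataset schema (a `"dimension"` tag per example) and the weighting scheme below are assumptions for illustration, not the paper's actual method.

```python
import random
from collections import defaultdict

def hybrid_sample(dataset, weights, batch_size, rng=random):
    """Draw a training batch whose composition across alignment
    objectives follows the given weights.

    Illustrative sketch only: assumes each example carries a
    "dimension" tag naming its alignment objective (e.g.
    "helpfulness", "harmlessness"); the real HummerRM sampling
    scheme is not specified on this page.
    """
    # Group examples by their alignment dimension.
    by_dim = defaultdict(list)
    for example in dataset:
        by_dim[example["dimension"]].append(example)

    # Allocate slots in the batch proportionally to each weight.
    total = sum(weights.values())
    batch = []
    for dim, w in weights.items():
        k = max(1, round(batch_size * w / total))
        # Sample with replacement so small subsets are not exhausted.
        batch.extend(rng.choices(by_dim[dim], k=min(k, batch_size)))

    rng.shuffle(batch)
    return batch[:batch_size]
```

Equal weights yield a batch balanced across objectives; skewing the weights lets a downstream task prioritize one objective without discarding the others.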
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 587