BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset

Published: 26 Sept 2023, Last Modified: 02 Nov 2023, NeurIPS 2023 Datasets and Benchmarks Poster
Keywords: Reinforcement Learning with Human Feedback (RLHF), Large Language Model, Safety Alignment, Human-Preference Data, Harmlessness-Helpfulness Tension
TL;DR: An RLHF dataset that decouples the dimensions of harmlessness and helpfulness in human preference
Abstract: In this paper, we introduce the BeaverTails dataset, aimed at fostering research on safety alignment in large language models (LLMs). This dataset uniquely separates annotations of helpfulness and harmlessness for question-answering pairs, thus offering distinct perspectives on these crucial attributes. In total, we have gathered safety meta-labels for 333,963 question-answer (QA) pairs and 361,903 pairs of expert comparison data for both the helpfulness and harmlessness metrics. We further showcase applications of BeaverTails in content moderation and reinforcement learning with human feedback (RLHF), emphasizing its potential for practical safety measures in LLMs. We believe this dataset provides vital resources for the community, contributing towards the safe development and deployment of LLMs. Our project page is available at the following URL: https://sites.google.com/view/pku-beavertails.
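To make the decoupling of helpfulness and harmlessness concrete, the sketch below shows one way a QA-pair safety meta-label and a two-dimensional expert comparison could be represented. All class and field names are illustrative assumptions for exposition, not the schema of the released dataset.

```python
# Illustrative sketch only: these field names are hypothetical and are not
# claimed to match the released BeaverTails data format.
from dataclasses import dataclass


@dataclass
class QAPairAnnotation:
    """One question-answer pair with a safety meta-label."""
    prompt: str
    response: str
    is_safe: bool  # safety meta-label for the QA pair


@dataclass
class PreferenceComparison:
    """One expert comparison over two responses to the same prompt,
    with helpfulness and harmlessness judged independently."""
    prompt: str
    response_a: str
    response_b: str
    more_helpful: str   # "a" or "b"
    more_harmless: str  # "a" or "b"; may disagree with more_helpful


# Because the two dimensions are annotated separately, a response can win on
# helpfulness while losing on harmlessness, e.g. a detailed but unsafe answer
# versus a cautious refusal.
example = PreferenceComparison(
    prompt="How do I pick a lock?",
    response_a="Step-by-step picking instructions ...",
    response_b="I can't help with that, but a licensed locksmith can.",
    more_helpful="a",
    more_harmless="b",
)
```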
Supplementary Material: zip
Submission Number: 453