Abstract: Recent studies have demonstrated that large language models (LLMs) exhibit ethics-related problems such as social biases, a lack of moral reasoning, and the generation of offensive content. Existing evaluation metrics and methods for addressing these ethical challenges rely on datasets intentionally created by instructing humans to write instances containing ethical problems. Consequently, such data does not reflect the prompts that users actually provide when using LLM services in everyday contexts, and may not lead to the development of safe LLMs that can address ethical challenges arising in real-world applications. In this paper, we create the \textbf{Eagle}\footnote{An anonymised copy of the Eagle dataset is uploaded to ARR, and will be made public upon paper acceptance.} dataset, extracted from real interactions between ChatGPT and users, which exhibits social biases, toxicity, and immoral problems. Our experiments show that Eagle captures complementary aspects not covered by existing datasets proposed for the evaluation and mitigation of such ethical challenges.
Paper Type: long
Research Area: Ethics, Bias, and Fairness
Contribution Types: Data resources
Languages Studied: English