Debiasing large language models for persona-based dialogue systems

Anonymous

16 Oct 2023 | ACL ARR 2023 October Blind Submission | Readers: Everyone
Abstract: Persona-based chatbots are conversational AI systems designed to emulate the behaviour and characteristics of specific personas, whether real or fictional. Previous research has mainly concentrated on aligning chatbot responses with predefined personas. However, manually creating these personas is time-consuming and may not fully capture all aspects of an individual's personality. This study introduces a new task, persona generation, which aims to produce diverse, high-quality personas before or during a conversation. Inspired by the success of large language models, we use ChatGPT for this task and observe that the model exhibits a strong sampling bias towards personas resembling a specific demographic group. To increase persona diversity, we propose two strategies: (1) chain-of-decision prompting and (2) listing sampling. Experimental results show that our approaches significantly outperform temperature sampling and logit suppression in terms of diversity. As our method is task-agnostic and requires no additional training, it can be applied to other tasks that are susceptible to bias from large language models.
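The two strategies are only named in the abstract, so the sketch below is a hypothetical illustration of the general idea behind listing sampling, under the assumption that it means asking the model for many candidate personas in a single response and then sampling uniformly from that list, rather than generating one persona at a time from the model's own (biased) output distribution. The `generate` function, prompt wording, and parsing are assumptions for illustration, not the paper's implementation.

```python
import random
import re

def generate(prompt: str) -> str:
    """Placeholder for a call to a chat LLM (e.g. ChatGPT).
    Hard-coded here so the sketch runs; swap in a real API call."""
    return ("1. A retired marine biologist from Lisbon who paints watercolours.\n"
            "2. A 19-year-old esports commentator from Jakarta.\n"
            "3. A wheat farmer in Saskatchewan who restores vintage radios.")

def listing_sample_persona(n_candidates: int = 10) -> str:
    """Request a numbered list of diverse personas in one generation,
    then pick one uniformly at random from the parsed list."""
    prompt = (
        f"List {n_candidates} short persona descriptions that differ in age, "
        "occupation, culture, and interests. "
        f"Number them 1 to {n_candidates}, one per line."
    )
    response = generate(prompt)
    # Strip leading "1." / "2)" markers and drop empty lines.
    candidates = [re.sub(r"^\s*\d+[.)]\s*", "", line).strip()
                  for line in response.splitlines() if line.strip()]
    return random.choice(candidates)

if __name__ == "__main__":
    print(listing_sample_persona(3))
```

The intuition, consistent with the abstract, is that forcing the model to enumerate many personas in one pass and sampling from that enumeration reduces reliance on the model's single most probable (and demographically narrow) completion; the actual chain-of-decision prompting and listing sampling procedures are described in the paper itself.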
Paper Type: long
Research Area: Ethics, Bias, and Fairness
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.