Abstract: While Large Language Models (LLMs) have made significant strides in replicating human-like abilities, there are concerns about a reduction in the linguistic diversity of their outputs. This results in the homogenization of viewpoints and perspectives, as well as the underrepresentation of specific demographic groups. Although several fine-tuning and prompting techniques have been suggested to tackle the issue, they are often tailored to specific tasks or come with a substantial increase in computational cost and latency. This makes them challenging to apply in applications that demand very low latency, such as chatbots and virtual assistants. We propose Possibility Exploration Fine-Tuning (PEFT), a task-agnostic framework that enhances the text diversity of LLMs without increasing latency or computational cost. Given the same prompt, models fine-tuned with PEFT can simultaneously generate multiple diverse responses, each corresponding to a controllable possibility number. Experiments with Mistral 7B and LLAMA 2 on open-domain dialogue generation demonstrate that PEFT significantly enhances output diversity, as evidenced by lower similarity among candidate responses. Because PEFT targets semantic diversity rather than merely lexical diversity, it can markedly reduce demographic bias in dialogue systems.
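To make the "controllable possibility number" idea concrete, the following is a minimal, hypothetical sketch of how a dialogue context might be paired with possibility numbers at decoding time. The `[possibility i]` control-token format and the function name are illustrative assumptions, not the paper's actual prompt template; the model call itself is omitted.

```python
# Hypothetical sketch of PEFT-style decoding with possibility numbers.
# The "[possibility i]" prefix format is an assumption for illustration;
# the paper's exact control format may differ.

def build_possibility_prompts(context: str, num_possibilities: int = 3) -> list[str]:
    """Create one prompt per possibility number for the same dialogue context.

    A model fine-tuned this way is trained so that different possibility
    numbers steer it toward semantically distinct responses, so a single
    batched forward pass can yield several diverse candidates without
    extra sampling passes or added per-candidate latency.
    """
    return [
        f"[possibility {i}] {context}"  # hypothetical control prefix
        for i in range(1, num_possibilities + 1)
    ]

# The resulting prompts would all be sent to the fine-tuned model in one batch.
prompts = build_possibility_prompts("User: What should I do this weekend?")
for p in prompts:
    print(p)
```

Since every prompt shares the same context and differs only in its possibility number, diversity comes from the fine-tuned conditioning rather than from temperature or repeated sampling.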
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: conversational modeling; bias/toxicity
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2913