She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models

Published: 21 Feb 2024, Last Modified: 21 Feb 2024SAI-AAAI2024 OralEveryoneRevisionsBibTeXCC BY 4.0
Keywords: LLM, fairness and bias, safety, robustness, alignment
Abstract: As the use of large language models (LLMs) increases within society, as does the risk of their misuse. Appropriate safeguards must be in place to ensure LLM outputs uphold the ethical standards of society, highlighting the positive role that artificial intelligence technologies can have. Recent events indicate ethical concerns around conventionally trained LLMs, leading to overall unsafe user experiences. This motivates our research question: how do we ensure LLM alignment? In this work, we introduce a test suite of unique prompts to foster the development of aligned LLMs that are \textit{fair}, \textit{safe}, and \textit{robust}. We show that prompting LLMs at every step of the development pipeline, including data curation, pre-training, and fine-tuning, will result in an overall more responsible model. Our test suite evaluates outputs from four state-of-the-art language models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in this paper highlights a gap between societal alignment and the capabilities of current LLMs. Additionally, implementing a test suite such as ours lowers the environmental overhead of making models safe and fair.
Submission Number: 1
Loading