Foundational Challenges in Assuring Alignment and Safety of Large Language Models

Published: 01 Jan 2024 · Last Modified: 15 May 2025 · CoRR 2024 · CC BY-SA 4.0
Abstract: This work identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs). These challenges are organized into three categories: scientific understanding of LLMs, development and deployment methods, and sociotechnical challenges. Based on the identified challenges, we pose 200+ concrete research questions.