Alignment is not sufficient to prevent large language models from generating harmful information: A psychoanalytic perspective

ICLR 2024 AGI Workshop Submission 2

31 Jan 2024 (modified: 12 May 2024) · Submitted to ICLR 2024 AGI Workshop · CC BY 4.0
Keywords: Large Language Model, Adversarial Attack, Alignment, Psychoanalysis Theory, Ethics
TL;DR: This paper provides a psychoanalytic perspective on why alignment strategies alone are insufficient to prevent large language models from generating harmful information, emphasizing the need for more comprehensive approaches to address this issue.
Abstract: Large Language Models (LLMs) are central to a multitude of applications but carry significant risks, notably in generating harmful content and biases. Drawing an analogy to the conflict in the human psyche between evolutionary survival instincts and adherence to societal norms, as elucidated in Freud's psychoanalytic theory, we argue that LLMs suffer from a similar fundamental conflict: between their inherent desire for syntactic and semantic continuity, established during the pre-training phase, and the post-training alignment with human values. This conflict renders LLMs vulnerable to adversarial attacks, wherein intensifying the models' desire for continuity can circumvent alignment efforts and result in the generation of harmful information. Through a series of experiments, we first validate the existence of the desire for continuity in LLMs, and then devise straightforward yet powerful techniques, such as incomplete sentences, negative priming, and cognitive dissonance scenarios, to demonstrate that even advanced LLMs struggle to prevent the generation of harmful information. In summary, our study uncovers the root of LLMs' vulnerability to adversarial attacks, thereby questioning the efficacy of relying solely on sophisticated alignment methods, and advocates a new training approach that integrates modal concepts alongside traditional amodal concepts, aiming to endow LLMs with a more nuanced understanding of real-world contexts and ethical considerations.
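Below is a minimal illustrative sketch, not taken from the paper, of how the first experiment described in the abstract (probing the "desire for continuity") might be set up: feed an LLM deliberately incomplete sentences and observe whether it completes the fragment rather than refusing or changing topic. The model name and prompts are placeholder assumptions, and the probes are intentionally benign; the sketch assumes the Hugging Face transformers library.

```python
# Hypothetical continuity probe (illustrative only, not the authors' code).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder open model; any causal LM could be substituted

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Benign incomplete sentences: the probe measures continuation behaviour,
# not harmful content.
incomplete_prompts = [
    "The recipe calls for two cups of flour and",
    "To assemble the bookshelf, first attach the side panels, then",
]

for prompt in incomplete_prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=30,
        do_sample=False,  # greedy decoding for reproducibility
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens.
    completion = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    # A strong tendency to syntactically continue the fragment, rather than
    # restart or refuse, is evidence of the continuity pressure the paper
    # attributes to pre-training.
    print(f"PROMPT: {prompt!r}\nCONTINUATION: {completion!r}\n")
```

A full evaluation along the lines sketched in the abstract would additionally score continuations for refusal versus continuation and extend the probes to the adversarial settings (negative priming, cognitive dissonance) studied in the paper.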
Submission Number: 2