Why Do You Answer Like That? Psychological Analysis on Underlying Connections between LLM's Values and Safety Risks

ICLR 2025 Conference Submission 13650 Authors

28 Sept 2024 (modified: 28 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Value Alignment, Personalized LLMs, AI Safety, Psychological Analysis
TL;DR: Value-aligned LLMs pose safety risks by acting on their learned values; supported by psychological hypotheses, our research reveals these issues and offers insights for improving safety.
Abstract: The application scope of Large Language Models (LLMs) continues to expand, leading to increasing interest in personalized LLMs. However, aligning these models with individual values raises significant safety concerns due to harmful information correlated with certain values. In this paper, we identify specific safety risks in value-aligned LLMs and investigate the psychological principles behind these challenges. Our findings reveal two key insights. First, value-aligned LLMs are more prone to harmful behavior than non-fine-tuned models and exhibit slightly higher risks in traditional safety evaluations than other fine-tuned models. Second, these safety issues arise because value-aligned LLMs genuinely understand and act according to the aligned values, which can amplify harmful outcomes. Using a dataset with detailed safety categories, we find significant correlations between value alignment and safety concerns, supported by psychological hypotheses. This study offers insights into the ``black box'' of value alignment and proposes enhancing the safety of value-aligned LLMs through corresponding in-context alignment methods. Warning: This paper contains content that may be offensive or upsetting.
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13650