Why Do You Answer Like That? Psychological Analysis on Underlying Connections between LLM's Values and Safety Risks
The application scope of Large Language Models (LLMs) continues to expand, leading to increasing interest in personalized LLMs. However, aligning these models with individual values raises significant safety concerns, as certain values can correlate with harmful content. In this paper, we identify specific safety risks in value-aligned LLMs and investigate the psychological principles behind these challenges. Our findings reveal two key insights. First, value-aligned LLMs are more prone to harmful behavior than non-fine-tuned models and exhibit slightly higher risks in traditional safety evaluations than other fine-tuned models. Second, these safety issues arise because value-aligned LLMs genuinely understand and act according to the aligned values, which can amplify harmful outcomes. Using a dataset with detailed safety categories, we find significant correlations between value alignment and safety concerns, supported by psychological hypotheses. This study offers insights into the ``black box'' of value alignment and proposes enhancing the safety of value-aligned LLMs through corresponding in-context alignment methods. Warning: This paper contains content that may be offensive or upsetting.