Teaching Values to Machines: Simulating Human-Like Behavior in LLMs with Value-Prompting

ICLR 2026 Conference Submission 18942 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: human behavior analysis, NLP tools for social analysis
TL;DR: Prompting LLMs with specific human values makes them behave consistently with those values, mimicking human psychological patterns and allowing for simulated societal-level experiments.
Abstract: Large Language Models (LLMs) demonstrate a remarkable capacity to adopt different personas and roles. Yet it remains unclear whether they can manifest behavior that adheres to a coherent set of values. In this paper, we introduce value-prompting, a novel prompting technique that draws upon established psychological theories of human values. Using a comprehensive behavioral test, we demonstrate that value-prompting systematically induces value-coherent behaviors in LLMs. We then administer a set of psychological questionnaires to the value-prompted LLMs, covering aspects such as pro-sociality, personality traits, and everyday behaviors. We also examine different approaches to simulating the value composition of an entire population. Our results show that value-prompted LLMs exhibit value structures and value-behavior relationships that align with human population studies. These findings showcase the potential of value-prompting as a psychologically grounded tool for manipulating LLM behavior.
Primary Area: applications to neuroscience & cognitive science
Submission Number: 18942