Abstract: As Large Language Models (LLMs) demonstrate increasingly strong human-like capabilities, the need to align them with human values has become pressing.
Recent techniques such as prompt learning and reinforcement learning are being used to bring LLMs closer to human values.
While these techniques address broad ethical and helpfulness concerns, they rarely consider simulating the values of specific individuals.
To bridge this gap, we propose SimVBG, a framework that simulates individual values based on individual backstories reflecting past experiences and demographic information.
SimVBG transforms structured data about an individual into a backstory and utilizes a multi-module architecture inspired by the Cognitive-Affective Personality System to simulate individual values based on these backstories.
We test SimVBG on a self-constructed benchmark derived from the World Values Survey and show that SimVBG improves top-1 accuracy by more than 10\% over a retrieval-augmented generation baseline.
Further analysis shows that performance increases as additional user interaction history becomes available, indicating that the model can refine its persona over time. Code, dataset, and complete experimental results are anonymously available at
https://anonymous.4open.science/r/SimVBG-029C.
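To make the described pipeline concrete, the following is a minimal Python sketch of the two stages named in the abstract: turning structured individual data into a backstory, then prompting an LLM through CAPS-inspired steps to answer a survey item. All function names, prompts, and the model identifier are illustrative assumptions, not the authors' implementation; it assumes an OpenAI-style chat-completion client.

```python
# Hypothetical sketch of the SimVBG-style pipeline; not the authors' code.
from openai import OpenAI  # any chat-completion-style client would work

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name (assumption)

def profile_to_backstory(profile: dict) -> str:
    """Stage 1: turn structured demographic/experience fields into a first-person backstory."""
    prompt = (
        "Write a short first-person backstory for a person with these attributes:\n"
        + "\n".join(f"- {k}: {v}" for k, v in profile.items())
    )
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def simulate_value_response(backstory: str, question: str, options: list[str]) -> str:
    """Stage 2: CAPS-inspired multi-step prompting — interpret the situation, recall
    relevant experiences and affect from the backstory, then pick one survey option."""
    steps = [
        f"Backstory:\n{backstory}\n\nSurvey question: {question}\n"
        "Step 1: How would this person interpret the question?",
        "Step 2: Which experiences, beliefs, and feelings from the backstory are relevant?",
        "Step 3: Given the above, answer with exactly one option from: " + ", ".join(options),
    ]
    messages = []
    for step in steps:
        messages.append({"role": "user", "content": step})
        resp = client.chat.completions.create(model=MODEL, messages=messages)
        messages.append({"role": "assistant", "content": resp.choices[0].message.content})
    return messages[-1]["content"]  # the final step's answer (one of the options)
```

In this sketch, top-1 accuracy would be computed by comparing the option returned by `simulate_value_response` against the respondent's actual World Values Survey answer; the multi-turn message history is what lets later steps condition on earlier "cognitive-affective" reasoning.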
Paper Type: Long
Research Area: Human-Centered NLP
Research Area Keywords: Human-Centered NLP, Computational Social Science and Cultural Analytics, Language Modeling
Contribution Types: NLP engineering experiment
Languages Studied: English
Keywords: Human-Centered NLP, Computational Social Science and Cultural Analytics, Language Modeling
Submission Number: 6469