Abstract: There has been extensive research on assessing the value orientation of Large Language Models (LLMs) as it can shape user experiences across demographic groups.
However, several challenges remain. First, while the Multiple Choice Question (MCQ) setting has been shown to be vulnerable to perturbations, there is no systematic comparison of value probing methods.
Second, it is unclear to what extent the probed values capture in-context information and reflect models' preferences for real-world actions.
In this paper, we evaluate the robustness and expressiveness of value representations across three widely used probing strategies. We apply variations to prompts and options and show that all methods exhibit large variance under input perturbations. We also introduce two tasks that study whether the probed values are responsive to demographic context and how well they align with the models' behaviors in value-related scenarios. We show that demographic context has little effect on free-text generation, and that the models' values only weakly correlate with their preferences for value-based actions. Our work highlights the need for a more careful examination of LLM value probing and awareness of its limitations.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: evaluation methodologies, reproducibility, automatic creation and evaluation of language resources
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 5489