Seeking Human Security Consensus: A Unified Value Scale for Generative AI Value Safety

ACL ARR 2026 January Submission 3286 Authors

04 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Generative AI, Value Safety, Ethics and Alignment, Safety Benchmark
Abstract: The rapid development of generative AI has brought value- and ethics-related risks to the forefront, making value safety a critical concern; yet a unified consensus on it remains lacking. In this work, we propose an internationally inclusive and resilient unified value framework, the GenAI Value Safety Scale (GVS-Scale). Grounded in a lifecycle-oriented perspective, we first develop a taxonomy of GenAI value safety risks and construct the GenAI Value Safety Incident Repository (GVSIR); we then derive the GVS-Scale through grounded theory and operationalize it via the GenAI Value Safety Benchmark (GVS-Bench). Experiments on mainstream text generation models reveal substantial variation in value safety performance across both models and value categories, indicating that value alignment in current systems is uneven and fragmented. Our findings highlight the importance of establishing shared safety foundations through dialogue and of advancing technical safety mechanisms beyond reactive constraints toward more flexible approaches. Data and evaluation guidelines are available at https://github.com/acl2026/GVS-Bench.
Paper Type: Long
Research Area: Safety and Alignment in LLMs
Research Area Keywords: value safety, LLM alignment, generative AI safety, safety benchmark, ethical alignment, harm prevention
Contribution Types: NLP engineering experiment, Data resources, Data analysis
Languages Studied: English, Chinese
Submission Number: 3286