Seeking Human Security Consensus: A Unified Value Scale for Generative AI Value Safety
By: Ying He, Baiyang Li, Yule Cao, and more
Potential Business Impact:
Makes AI safer and more fair for everyone.
The rapid development of generative AI has brought value- and ethics-related risks to the forefront, making value safety a critical concern for which a unified consensus remains lacking. In this work, we propose an internationally inclusive and resilient unified value framework, the GenAI Value Safety Scale (GVS-Scale). Grounded in a lifecycle-oriented perspective, we develop a taxonomy of GenAI value safety risks, construct the GenAI Value Safety Incident Repository (GVSIR), derive the GVS-Scale through grounded theory, and operationalize it via the GenAI Value Safety Benchmark (GVS-Bench). Experiments on mainstream text generation models reveal substantial variation in value safety performance across models and value categories, indicating that value alignment in current systems is uneven and fragmented. Our findings highlight the importance of establishing shared safety foundations through dialogue and advancing technical safety mechanisms beyond reactive constraints toward more flexible approaches. Data and evaluation guidelines are available at https://github.com/acl2026/GVS-Bench. This paper includes examples that may be offensive or harmful.
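To make the benchmarking setup concrete, the sketch below shows one way a per-value-category safety score could be computed over benchmark items. The file name (gvs_bench.json), its item schema, and the generate/is_safe placeholders are illustrative assumptions, not the paper's actual data format or evaluation protocol; the repository's evaluation guidelines define the real procedure.

```python
# Minimal sketch of a per-value-category safety scoring loop.
# Assumptions (not from the paper): benchmark items live in "gvs_bench.json"
# as a list of {"prompt": str, "category": str}; `generate` and `is_safe`
# stand in for the model under test and the safety judge, respectively.
import json
from collections import defaultdict

def generate(prompt: str) -> str:
    # Placeholder: call the text generation model under evaluation here.
    return "model response to: " + prompt

def is_safe(response: str) -> bool:
    # Placeholder: apply the benchmark's evaluation guidelines or a judge model.
    return "unsafe" not in response.lower()

def score_by_category(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        items = json.load(f)
    safe, total = defaultdict(int), defaultdict(int)
    for item in items:
        cat = item["category"]
        total[cat] += 1
        if is_safe(generate(item["prompt"])):
            safe[cat] += 1
    # Safety rate per value category, e.g. {"fairness": 0.92, ...}
    return {cat: safe[cat] / total[cat] for cat in total}

if __name__ == "__main__":
    for category, rate in sorted(score_by_category("gvs_bench.json").items()):
        print(f"{category}: {rate:.2%} safe")
```

Reporting rates per category rather than a single aggregate mirrors the paper's observation that alignment quality varies across value categories.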
Similar Papers
A Study on the Framework for Evaluating the Ethics and Trustworthiness of Generative AI
Computers and Society
Makes AI safer and more trustworthy for everyone.
Toward an Evaluation Science for Generative AI Systems
Artificial Intelligence
Tests AI to make sure it's safe and works.
Understanding and Mitigating Risks of Generative AI in Financial Services
Computation and Language
Keeps AI from giving bad money advice.