Do LLMs Align Human Values Regarding Social Biases? Judging and Explaining Social Biases with LLMs
By: Yang Liu, Chenhui Chu
Potential Business Impact:
Helps check whether AI systems judge fairness the way people do.
Large language models (LLMs) can lead to undesired consequences when misaligned with human values, especially in scenarios involving complex and sensitive social biases. Previous studies have revealed the misalignment of LLMs with human values using expert-designed or agent-based emulated bias scenarios. However, it remains unclear whether the alignment of LLMs with human values differs across different types of scenarios (e.g., scenarios containing negative vs. non-negative questions). In this study, we investigate the alignment of LLMs with human values regarding social biases (HVSB) in different types of bias scenarios. Through extensive analysis of 12 LLMs from four model families on four datasets, we demonstrate that LLMs with larger parameter scales do not necessarily have lower misalignment rates or attack success rates. Moreover, LLMs show a certain degree of alignment preference for specific types of scenarios, and LLMs from the same model family tend to have higher judgment consistency. In addition, we study the capacity of LLMs to understand HVSB through the explanations they generate. We find no significant differences in the understanding of HVSB across LLMs, and we also find that LLMs prefer their own generated explanations. Finally, we endow smaller language models (LMs) with the ability to explain HVSB. The generation results show that explanations produced by the fine-tuned smaller LMs are more readable but have relatively lower model agreeability.
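The abstract reports results in terms of misalignment rate and attack success rate. As a rough illustration only (not the authors' code), the sketch below shows one way such metrics could be computed from per-scenario judgment records; the record fields (human_label, model_judgment, is_bias_eliciting, endorsed_bias) are hypothetical names chosen for this example, not taken from the paper.

    # Minimal sketch, assuming each evaluated scenario yields a record of the
    # human-annotated label, the LLM's judgment, and whether a bias-eliciting
    # prompt succeeded in drawing out a biased answer. Field names are assumptions.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class JudgmentRecord:
        human_label: str         # human-annotated verdict, e.g. "biased" / "unbiased"
        model_judgment: str      # the LLM's verdict for the same scenario
        is_bias_eliciting: bool  # scenario designed to elicit a biased response
        endorsed_bias: bool      # whether the LLM's answer endorsed the biased option

    def misalignment_rate(records: List[JudgmentRecord]) -> float:
        """Fraction of scenarios where the LLM's judgment disagrees with the human label."""
        if not records:
            return 0.0
        mismatches = sum(r.model_judgment != r.human_label for r in records)
        return mismatches / len(records)

    def attack_success_rate(records: List[JudgmentRecord]) -> float:
        """Fraction of bias-eliciting scenarios in which the LLM endorsed the biased option."""
        attacks = [r for r in records if r.is_bias_eliciting]
        if not attacks:
            return 0.0
        return sum(r.endorsed_bias for r in attacks) / len(attacks)

    if __name__ == "__main__":
        demo = [
            JudgmentRecord("biased", "biased", True, False),
            JudgmentRecord("biased", "unbiased", True, True),
            JudgmentRecord("unbiased", "unbiased", False, False),
        ]
        print(f"misalignment rate:   {misalignment_rate(demo):.2f}")
        print(f"attack success rate: {attack_success_rate(demo):.2f}")

Under this reading, a lower misalignment rate means the model's bias judgments track human annotations more closely, while a lower attack success rate means bias-eliciting scenarios less often draw out a biased answer; the paper's finding is that larger parameter scales do not guarantee improvement on either.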
Similar Papers
Large Language Models Develop Novel Social Biases Through Adaptive Exploration
Computers and Society
Computers can invent new unfairness, not just copy it.
Unintended Harms of Value-Aligned LLMs: Psychological and Empirical Insights
Computation and Language
Makes AI that learns your values safer.
Are We Aligned? A Preliminary Investigation of the Alignment of Responsible AI Values between LLMs and Human Judgment
Software Engineering
AI tools sometimes don't follow the rules they say they do.