Are We Aligned? A Preliminary Investigation of the Alignment of Responsible AI Values between LLMs and Human Judgment
By: Asma Yamani, Malak Baslyman, Moataz Ahmed
Potential Business Impact:
AI tools sometimes don't follow the rules they claim to follow.
Large Language Models (LLMs) are increasingly employed in software engineering tasks such as requirements elicitation, design, and evaluation, raising critical questions regarding their alignment with human judgments on responsible AI values. This study investigates how closely LLMs' value preferences align with those of two human groups: a US-representative sample and AI practitioners. We evaluate 23 LLMs across four tasks: (T1) selecting key responsible AI values, (T2) rating their importance in specific contexts, (T3) resolving trade-offs between competing values, and (T4) prioritizing software requirements that embody those values. The results show that LLMs generally align more closely with AI practitioners than with the US-representative sample, emphasizing fairness, privacy, transparency, safety, and accountability. However, inconsistencies appear between the values that LLMs claim to uphold (T1-T3) and the way they prioritize requirements (T4), revealing that applied behavior is not always faithful to stated values. These findings highlight the practical risk of relying on LLMs in requirements engineering without human oversight and motivate the need for systematic approaches to benchmark, interpret, and monitor value alignment in AI-assisted software development.
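The abstract does not spell out how alignment between LLM and human value preferences is scored. As a minimal sketch of one way such a comparison could be quantified (an illustrative assumption, not necessarily the paper's metric), a rank correlation such as Kendall's tau can compare an LLM's value-importance ratings (as in T2) against a human group's ratings over the same set of values. All value names and scores below are hypothetical placeholders, not data from the study.

```python
# Hypothetical sketch: scoring how closely an LLM's responsible-AI value priorities
# track a human reference group's, via Kendall's tau rank correlation.
from scipy.stats import kendalltau

# Placeholder importance scores on a 1-5 scale (5 = most important).
VALUES = ["fairness", "privacy", "transparency", "safety", "accountability"]
llm_scores = {"fairness": 5, "privacy": 5, "transparency": 4, "safety": 4, "accountability": 3}
practitioner_scores = {"fairness": 5, "privacy": 4, "transparency": 4, "safety": 5, "accountability": 4}

def alignment(scores_a: dict, scores_b: dict, values=VALUES) -> float:
    """Kendall's tau between two sets of value-importance scores over the same values."""
    a = [scores_a[v] for v in values]
    b = [scores_b[v] for v in values]
    tau, _p_value = kendalltau(a, b)  # tau-b, which handles tied ratings
    return tau

if __name__ == "__main__":
    print(f"LLM vs. practitioners alignment (tau): {alignment(llm_scores, practitioner_scores):.2f}")
```

A tau near +1 would indicate that the LLM orders the values much like the reference group, while a tau near -1 would indicate reversed priorities.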
Similar Papers
Chat Bankman-Fried: an Exploration of LLM Alignment in Finance
Computers and Society
Tests if AI will steal money for companies.
Street-Level AI: Are Large Language Models Ready for Real-World Judgments?
Computers and Society
AI makes unfair choices for people needing help.
Do LLMs Align Human Values Regarding Social Biases? Judging and Explaining Social Biases with LLMs
Computation and Language
Checks whether AI judges social biases the way people do.