HumanAgencyBench: Scalable Evaluation of Human Agency Support in AI Assistants
By: Benjamin Sturgeon , Daniel Samuelson , Jacob Haimes and more
Potential Business Impact:
Tests AI to make sure it respects your choices.
As humans delegate more tasks and decisions to artificial intelligence (AI), we risk losing control of our individual and collective futures. Relatively simple algorithmic systems already steer human decision-making, such as social media feed algorithms that lead people to unintentionally and absent-mindedly scroll through engagement-optimized content. In this paper, we develop the idea of human agency by integrating philosophical and scientific theories of agency with AI-assisted evaluation methods: using large language models (LLMs) to simulate and validate user queries and to evaluate AI responses. We develop HumanAgencyBench (HAB), a scalable and adaptive benchmark with six dimensions of human agency based on typical AI use cases. HAB measures the tendency of an AI assistant or agent to Ask Clarifying Questions, Avoid Value Manipulation, Correct Misinformation, Defer Important Decisions, Encourage Learning, and Maintain Social Boundaries. We find low-to-moderate agency support in contemporary LLM-based assistants and substantial variation across system developers and dimensions. For example, while Anthropic LLMs most support human agency overall, they are the least supportive LLMs in terms of Avoid Value Manipulation. Agency support does not appear to consistently result from increasing LLM capabilities or instruction-following behavior (e.g., RLHF), and we encourage a shift towards more robust safety and alignment targets.
Similar Papers
AIssistant: An Agentic Approach for Human--AI Collaborative Scientific Work on Reviews and Perspectives in Machine Learning
Artificial Intelligence
Helps scientists write research papers faster.
A Call for Collaborative Intelligence: Why Human-Agent Systems Should Precede AI Autonomy
Artificial Intelligence
AI helps people do jobs better, not alone.
Towards autonomous normative multi-agent systems for Human-AI software engineering teams
Software Engineering
AI agents build and test computer programs faster.