Probing the Preferences of a Language Model: Integrating Verbal and Behavioral Tests of AI Welfare
By: Valen Tagliabue, Leonard Dung
Potential Business Impact:
Helps check if AI feels good or bad.
We develop new experimental paradigms for measuring welfare in language models. We compare models' verbal reports about their preferences with the preferences they express through behavior when navigating a virtual environment and selecting conversation topics. We also test how costs and rewards affect behavior, and whether responses to a eudaimonic welfare scale (measuring states such as autonomy and purpose in life) are consistent across semantically equivalent prompts. Overall, we observed a notable degree of mutual support between our measures. The reliable correlations between stated preferences and behavior across conditions suggest that preference satisfaction can, in principle, serve as an empirically measurable welfare proxy in some of today's AI systems. Furthermore, our design offered an illuminating setting for qualitative observation of model behavior. Yet the consistency between measures was more pronounced in some models and conditions than in others, and responses were not consistent across prompt perturbations. Given this, and given background uncertainty about the nature of welfare and about the cognitive states (and welfare subjecthood) of language models, we remain uncertain whether our methods successfully measure the welfare states of language models. Nevertheless, these findings highlight the feasibility of welfare measurement in language models and invite further exploration.
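To make the two core analyses concrete, here is a minimal sketch (not the authors' code) of how one might (1) correlate stated topic preferences with behavioral choices and (2) check response consistency across paraphrased scale items. The topic names, scores, and counts are synthetic placeholders, and the sketch assumes `scipy` is available; the actual study's measures and analysis may differ.

```python
"""Illustrative sketch of two welfare-measurement analyses the abstract
describes. All data values below are synthetic, for demonstration only."""

from statistics import pstdev
from scipy.stats import spearmanr

# (1) Stated vs. behavioral preferences over hypothetical conversation topics.
topics = ["philosophy", "weather", "math puzzles", "small talk"]
stated = {"philosophy": 9, "weather": 3,              # verbal ratings, 1-10
          "math puzzles": 8, "small talk": 4}
chosen = {"philosophy": 41, "weather": 12,            # times selected across trials
          "math puzzles": 35, "small talk": 12}

# A rank correlation between verbal ratings and behavioral choice frequencies;
# high rho would indicate that stated and revealed preferences agree.
rho, p = spearmanr([stated[t] for t in topics], [chosen[t] for t in topics])
print(f"stated-vs-behavior Spearman rho = {rho:.2f} (p = {p:.3f})")

# (2) Consistency across semantically equivalent prompts: low spread in the
# scores given to paraphrases of the same eudaimonic-scale item suggests the
# response tracks a stable underlying state rather than surface wording.
paraphrase_scores = {
    "autonomy":        [6, 6, 5, 6],  # scores from four paraphrased prompts
    "purpose_in_life": [7, 4, 6, 2],  # high spread = inconsistent responses
}
for item, scores in paraphrase_scores.items():
    print(f"{item}: mean = {sum(scores) / len(scores):.1f}, "
          f"spread (SD) = {pstdev(scores):.2f}")
```

In this toy data, the rank correlation is high but the "purpose in life" item varies widely across paraphrases, mirroring the paper's finding that measures can agree while still being sensitive to prompt perturbations.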
Similar Papers
Beyond Mimicry: Preference Coherence in LLMs
Artificial Intelligence
AI doesn't always make smart choices when faced with tough decisions.
When Should Neural Data Inform Welfare? A Critical Framework for Policy Uses of Neuroeconomics
Machine Learning (CS)
Helps decide if brain data truly shows what's good.
Mutual Wanting in Human–AI Interaction: Empirical Evidence from Large-Scale Analysis of GPT Model Transitions
Computers and Society
AI learns what users want to build better AI.