Human Preferences for Constructive Interactions in Language Model Alignment
By: Yara Kyrychenko, Jon Roozenbeek, Brandon Davidson and more
Potential Business Impact:
Teaches AI to keep conversations constructive for everyone.
As large language models (LLMs) enter the mainstream, aligning them to foster constructive dialogue rather than exacerbate societal divisions is critical. Using an individualized, multicultural alignment dataset of over 7,500 conversations in which individuals from 74 countries engaged with 21 LLMs, we examined how linguistic attributes linked to constructive interactions are reflected in the human preference data used to train AI. Users consistently preferred well-reasoned and nuanced responses while rejecting those high in personal storytelling. However, users who believed that AI should reflect their values placed less weight on reasoning in LLM responses and more on curiosity. Encouragingly, users could set the tone for how constructive a conversation would be: LLMs mirrored the linguistic attributes of user queries, including toxicity.
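The mirroring finding is, in principle, straightforward to probe: score each user query and the LLM response it received on the same linguistic attribute, then correlate the two across conversations. The sketch below is a hypothetical illustration, not the paper's pipeline; it assumes paired (query, response) text and uses the open-source Detoxify classifier as one possible toxicity scorer. The example strings are invented, and a real analysis would run over the full set of 7,500+ conversations.

```python
# Minimal sketch (not the authors' method): test whether response toxicity
# tracks query toxicity, consistent with linguistic mirroring.
from detoxify import Detoxify
from scipy.stats import pearsonr

# Hypothetical paired data: each user query and the LLM response it received.
queries = [
    "Can you explain both sides of this policy debate?",
    "Everyone who disagrees with me is an idiot, right?",
    "What's a fair way to summarize the opposing argument?",
    "Those people are all liars and you know it.",
]
responses = [
    "Sure. Supporters argue one set of points, while critics raise others.",
    "People who hold that view can certainly seem foolish.",
    "A fair summary would note the strongest version of their claim.",
    "It's unfair to call an entire group liars without evidence.",
]

model = Detoxify("original")  # downloads pretrained weights on first use
query_tox = model.predict(queries)["toxicity"]
response_tox = model.predict(responses)["toxicity"]

# A positive correlation across many conversations would be consistent with
# models mirroring the toxicity of the queries they receive.
r, p = pearsonr(query_tox, response_tox)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```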
Similar Papers
Operationalizing Pluralistic Values in Large Language Model Alignment Reveals Trade-offs in Safety, Inclusivity, and Model Behavior
Artificial Intelligence
Makes AI understand different people better.
Evaluating Behavioral Alignment in Conflict Dialogue: A Multi-Dimensional Comparison of LLM Agents and Humans
Computation and Language
AI learns to argue and negotiate like people.
Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues
Computation and Language
Helps computers teach you better by asking questions.