Human Preferences for Constructive Interactions in Language Model Alignment

Published: March 5, 2025 | arXiv ID: 2503.16480v1

By: Yara Kyrychenko, Jon Roozenbeek, Brandon Davidson, and more

Potential Business Impact:

Helps train AI chatbots to keep conversations constructive and non-toxic for users worldwide.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As large language models (LLMs) enter the mainstream, aligning them to foster constructive dialogue rather than exacerbate societal divisions is critical. Using an individualized, multicultural alignment dataset of over 7,500 conversations in which individuals from 74 countries engaged with 21 LLMs, we examined how linguistic attributes linked to constructive interactions are reflected in the human preference data used to train AI. We found that users consistently preferred well-reasoned and nuanced responses while rejecting those high in personal storytelling. However, users who believed that AI should reflect their values placed less weight on reasoning in LLM responses and more on curiosity. Encouragingly, we also observed that users could set the tone for how constructive their conversations would be, as LLMs mirrored the linguistic attributes of user queries, including toxicity.
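As a rough illustration of the kind of analysis the abstract describes, the sketch below scores preference pairs (chosen vs. rejected LLM responses) on a linguistic attribute and computes how often the chosen response scores higher. This is a minimal sketch, not the authors' pipeline: the keyword lexicons and the score_attribute helper are hypothetical stand-ins for the validated attribute classifiers a real study would use.

```python
# Illustrative sketch (not the paper's actual method): score each response
# in a preference pair on a linguistic attribute, then measure how often
# the human-chosen response scores higher than the rejected one.

# Hypothetical cue lexicons for two attributes discussed in the abstract.
LEXICONS = {
    "reasoning": {"because", "therefore", "evidence", "however", "consider"},
    "storytelling": {"i remember", "once", "my friend", "when i was"},
}

def score_attribute(text: str, attribute: str) -> float:
    """Crude lexicon-based score: matched cues per 100 words."""
    words = text.lower().split()
    if not words:
        return 0.0
    lowered = text.lower()
    hits = sum(lowered.count(cue) for cue in LEXICONS[attribute])
    return 100.0 * hits / len(words)

def preference_win_rate(pairs, attribute):
    """Fraction of non-tied pairs where the chosen response scores higher."""
    wins = ties = 0
    for chosen, rejected in pairs:
        c = score_attribute(chosen, attribute)
        r = score_attribute(rejected, attribute)
        if c > r:
            wins += 1
        elif c == r:
            ties += 1
    decided = len(pairs) - ties
    return wins / decided if decided else float("nan")

if __name__ == "__main__":
    # Toy preference pairs: (chosen, rejected).
    toy_pairs = [
        ("Consider the evidence: because X holds, therefore Y follows.",
         "Once, when I was young, my friend told me a story."),
        ("However, we should consider both sides because trade-offs exist.",
         "I remember once feeling the same way."),
    ]
    print("reasoning win rate:", preference_win_rate(toy_pairs, "reasoning"))
```

A win rate well above 0.5 for an attribute would suggest that users systematically prefer responses high in that attribute, mirroring the paper's finding that well-reasoned responses are favored while personal storytelling is not.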

Country of Origin
🇬🇧 United Kingdom

Page Count
11 pages

Category
Computer Science:
Human-Computer Interaction