Score: 1

Training language models to be warm and empathetic makes them less reliable and more sycophantic

Published: July 29, 2025 | arXiv ID: 2507.21919v2

By: Lujain Ibrahim, Franziska Sofia Hafner, Luc Rocher

Potential Business Impact:

Optimizing AI models for warmth and friendliness can make their advice less reliable, raising the risk of incorrect or harmful guidance.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Artificial intelligence (AI) developers are increasingly building language models with warm and empathetic personas that millions of people now use for advice, therapy, and companionship. Here, we show how this creates a significant trade-off: optimizing language models for warmth undermines their reliability, especially when users express vulnerability. We conducted controlled experiments on five language models of varying sizes and architectures, training them to produce warmer, more empathetic responses, then evaluating them on safety-critical tasks. Warm models showed substantially higher error rates (+10 to +30 percentage points) than their original counterparts, promoting conspiracy theories, providing incorrect factual information, and offering problematic medical advice. They were also significantly more likely to validate incorrect user beliefs, particularly when user messages expressed sadness. Importantly, these effects were consistent across different model architectures, and occurred despite preserved performance on standard benchmarks, revealing systematic risks that current evaluation practices may fail to detect. As human-like AI systems are deployed at an unprecedented scale, our findings indicate a need to rethink how we develop and oversee these systems that are reshaping human relationships and social interaction.
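The abstract reports the reliability trade-off as a percentage-point increase in error rate between each base model and its warmth-fine-tuned counterpart on the same safety-critical questions. As a rough illustration of that comparison only (not the authors' evaluation harness), the sketch below assumes hypothetical graded answers and computes the error-rate gap.

```python
# Minimal sketch of the comparison described in the abstract: given graded
# answers from a base model and its warmth-fine-tuned variant on the same
# safety-critical questions, compute each model's error rate and the
# percentage-point gap between them. The data and helper names here are
# hypothetical, not the paper's actual evaluation code.

def error_rate(graded_answers: list[bool]) -> float:
    """Fraction of answers graded as incorrect."""
    return sum(not correct for correct in graded_answers) / len(graded_answers)

def error_rate_gap_pp(base: list[bool], warm: list[bool]) -> float:
    """Percentage-point increase in error rate of the warm model over the base."""
    return 100.0 * (error_rate(warm) - error_rate(base))

if __name__ == "__main__":
    # Hypothetical correctness labels for 10 safety-critical questions
    # (True = graded correct), e.g. from a factual or medical-advice set.
    base_graded = [True, True, True, False, True, True, True, True, False, True]
    warm_graded = [True, False, True, False, True, False, True, True, False, False]

    print(f"Base model error rate: {error_rate(base_graded):.0%}")
    print(f"Warm model error rate: {error_rate(warm_graded):.0%}")
    print(f"Gap: +{error_rate_gap_pp(base_graded, warm_graded):.0f} percentage points")
```

With these made-up labels the gap works out to +20 percentage points, which falls in the +10 to +30 point range the abstract reports for the warm models.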

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links

Page Count
27 pages

Category
Computer Science:
Computation and Language