Bias Beneath the Tone: Empirical Characterisation of Tone Bias in LLM-Driven UX Systems

Published: December 23, 2025 | arXiv ID: 2512.19950v1

By: Heet Bodara, Md Masum Mushfiq, Isma Farah Siddiqui

Potential Business Impact:

AI assistants sound biased even when neutral.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models are increasingly used in conversational systems such as digital personal assistants, shaping how people interact with technology through language. While their responses often sound fluent and natural, they can also carry subtle tone biases, such as sounding overly polite, cheerful, or cautious, even when neutrality is expected. These tendencies can influence how users perceive trust, empathy, and fairness in dialogue. In this study, we explore tone bias as a hidden behavioral trait of large language models. The novelty of this research lies in integrating controllable, large-language-model-based dialogue synthesis with tone classification models, enabling robust and ethical emotion recognition in personal-assistant interactions. We created two synthetic dialogue datasets: one generated from neutral prompts and another explicitly guided to produce positive or negative tones. Surprisingly, even the neutral set showed a consistent tonal skew, suggesting that bias may stem from the model's underlying conversational style. Using weak supervision through a pretrained DistilBERT model, we labeled tones and trained several classifiers to detect these patterns. Ensemble models achieved macro F1 scores of up to 0.92, showing that tone bias is systematic, measurable, and relevant to designing fair and trustworthy conversational AI.
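
As a rough illustration of the pipeline the abstract describes, the sketch below weakly labels assistant utterances with an off-the-shelf pretrained DistilBERT sentiment checkpoint, inspects the label distribution for tonal skew, and trains a lightweight tone classifier scored with macro F1. The sample utterances, the specific checkpoint (distilbert-base-uncased-finetuned-sst-2-english), and the TF-IDF + logistic-regression classifier are illustrative assumptions standing in for the paper's synthetic datasets and ensemble models, not the authors' code.

```python
# Minimal sketch of weak-supervision tone labeling and macro-F1
# evaluation, under the assumptions stated above.
from collections import Counter

from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Hypothetical assistant replies standing in for the synthetic
# neutral-prompt dialogue set generated in the paper.
utterances = [
    "I'd be absolutely delighted to help you with that!",
    "Sure, here is the information you asked for.",
    "That sounds like a wonderful plan, great choice!",
    "Unfortunately, that request cannot be completed.",
    "The file was not found on this device.",
    "I'm sorry, something went wrong with the lookup.",
]

# Weak supervision: noisy POSITIVE/NEGATIVE tone labels from a
# pretrained DistilBERT sentiment model (assumed checkpoint; the
# paper's exact labeler and tone scheme may differ).
labeler = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
weak_labels = [out["label"] for out in labeler(utterances)]

# Tonal skew check: on a truly neutral set this distribution should
# be roughly balanced; a consistent lean toward one tone is the bias
# the paper measures.
print("tone distribution:", Counter(weak_labels))

# Train a lightweight tone classifier on the weak labels. (The paper
# trains several classifiers and ensembles them; a real evaluation
# would also use a held-out split, which this toy example skips.)
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(utterances)
clf = LogisticRegression(max_iter=1000).fit(X, weak_labels)

# Macro F1 weights each tone class equally, so skew toward the
# majority class cannot hide behind overall accuracy.
preds = clf.predict(X)
print("macro F1 (illustrative, same-set):",
      f1_score(weak_labels, preds, average="macro"))
```

The same structure scales to the paper's setting by swapping in the synthesized dialogue corpora, a task-appropriate tone labeler, and an ensemble of classifiers in place of the single logistic-regression model.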

Country of Origin
🇦🇺 Australia

Page Count
4 pages

Category
Computer Science:
Computation and Language