The Homogenizing Effect of Large Language Models on Human Expression and Thought
By: Zhivar Sourati, Alireza S. Ziabari, Morteza Dehghani
Potential Business Impact:
Warns that AI flattens unique thinking styles
Cognitive diversity, reflected in variations of language, perspective, and reasoning, is essential to creativity and collective intelligence. This diversity is rich and grounded in culture, history, and individual experience. Yet as large language models (LLMs) become deeply embedded in people's lives, they risk standardizing language and reasoning. This Review synthesizes evidence across linguistics, cognitive science, and computer science to show how LLMs reflect and reinforce dominant styles while marginalizing alternative voices and reasoning strategies. We examine how their design and widespread use contribute to this effect: the models mirror patterns in their training data, and convergence is amplified as people increasingly rely on the same models across contexts. Unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability.
Similar Papers
Large Language Models Develop Novel Social Biases Through Adaptive Exploration
Computers and Society
AI can invent new unfairness, not just copy it.
A Meta-Analysis of the Persuasive Power of Large Language Models
Human-Computer Interaction
AI persuades people about as effectively as humans do.
Unraveling the cognitive patterns of Large Language Models through module communities
Artificial Intelligence
Shows how AI model internals organize into communities resembling biological brains.