Language of Thought Shapes Output Diversity in Large Language Models

Published: January 16, 2026 | arXiv ID: 2601.11227v1

By: Shaoyang Xu, Wenxuan Zhang

Potential Business Impact:

Prompting AI models to reason in different languages yields more varied and creative outputs.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Output diversity is crucial for Large Language Models as it underpins pluralism and creativity. In this work, we reveal that controlling the language used during model thinking (the language of thought) provides a novel and structural source of output diversity. Our preliminary study shows that different thinking languages occupy distinct regions in a model's thinking space. Based on this observation, we study two repeated sampling strategies under multilingual thinking, Single-Language Sampling and Mixed-Language Sampling, and evaluate diversity on outputs that are constrained to be in English regardless of the thinking language used. Across extensive experiments, we demonstrate that switching the thinking language from English to non-English languages consistently increases output diversity, with a clear and consistent positive correlation: languages farther from English in the thinking space yield larger gains. We further show that aggregating samples across multiple thinking languages yields additional improvements through compositional effects, and that scaling sampling with linguistic heterogeneity expands the model's diversity ceiling. Finally, we show that these findings translate into practical benefits in pluralistic alignment scenarios, leading to broader coverage of cultural knowledge and value orientations in LLM outputs. Our code is publicly available at https://github.com/iNLP-Lab/Multilingual-LoT-Diversity.
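A minimal sketch of how the two sampling strategies described in the abstract might be reproduced. The prompt wording, language list, and the `sample_completion` helper are illustrative assumptions, not the authors' exact setup; see the linked repository for the actual implementation.

```python
import random

# Hypothetical helper: wire this to your own LLM client (e.g. a chat-completions
# endpoint). It should return one sampled completion for the given prompt.
def sample_completion(prompt: str, temperature: float = 1.0) -> str:
    raise NotImplementedError("plug in your LLM call here")

# Assumed set of thinking languages; the paper's language selection may differ.
THINKING_LANGUAGES = ["English", "German", "Chinese", "Arabic", "Swahili"]

def build_prompt(question: str, thinking_language: str) -> str:
    # Ask the model to reason in the chosen language but answer in English,
    # so diversity is compared on English outputs only, as in the paper's setup.
    return (
        f"Think step by step in {thinking_language}, "
        f"then give your final answer in English.\n\nQuestion: {question}"
    )

def single_language_sampling(question: str, language: str, n: int) -> list[str]:
    # Single-Language Sampling: all n samples share one thinking language.
    return [sample_completion(build_prompt(question, language)) for _ in range(n)]

def mixed_language_sampling(question: str, n: int) -> list[str]:
    # Mixed-Language Sampling: each sample draws its thinking language at random,
    # so the pool aggregates outputs across several thinking languages.
    langs = [random.choice(THINKING_LANGUAGES) for _ in range(n)]
    return [sample_completion(build_prompt(question, lang)) for lang in langs]
```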

Country of Origin
🇸🇬 Singapore

Page Count
14 pages

Category
Computer Science:
Computation and Language