The Reasoning Lingua Franca: A Double-Edged Sword for Multilingual AI

Published: October 23, 2025 | arXiv ID: 2510.20647v1

By: Alan Saji, Raj Dabre, Anoop Kunchukuttan, and more

Potential Business Impact:

AI reasoning models solve math and science problems more accurately when they reason in English, even for questions asked in other languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Reasoning Models (LRMs) achieve strong performance on mathematical, scientific, and other question-answering tasks, but their multilingual reasoning abilities remain underexplored. When presented with non-English questions, LRMs often default to reasoning in English, raising concerns about interpretability and the handling of linguistic and cultural nuances. We systematically compare an LRM's reasoning in English versus the language of the question. Our evaluation spans two tasks: MGSM and GPQA Diamond. Beyond measuring answer accuracy, we also analyze cognitive behaviors in the reasoning traces. We find that English reasoning traces exhibit a substantially higher presence of these behaviors, and that reasoning in English generally yields higher final-answer accuracy, with the performance gap widening as tasks become more complex. However, this English-centric strategy is susceptible to a key failure mode, getting "Lost in Translation": translation steps introduce errors that would have been avoided by reasoning in the question's language.
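The comparison described in the abstract can be sketched as a small evaluation harness. This is a minimal illustration, not the paper's actual code: `query_model` is a hypothetical stand-in for a real LRM call, and the prompt wording is an assumption; the paper's experiments use real models on MGSM and GPQA Diamond.

```python
def query_model(question: str, reasoning_language: str) -> str:
    """Hypothetical stub standing in for an LRM call.

    A real implementation would prompt something like:
    f"Think step by step in {reasoning_language}, then answer.\n{question}"
    """
    raise NotImplementedError

def accuracy(predictions, references):
    """Fraction of exact-match final answers."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

def compare_reasoning_languages(examples, model=query_model):
    """Compare final-answer accuracy for English vs. question-language reasoning.

    examples: list of (question, gold_answer, question_language) tuples.
    Returns a dict with one accuracy per reasoning-language condition.
    """
    gold = [g for _, g, _ in examples]
    english_preds = [model(q, "English") for q, _, _ in examples]
    native_preds = [model(q, lang) for q, _, lang in examples]
    return {
        "english": accuracy(english_preds, gold),
        "native": accuracy(native_preds, gold),
    }
```

The gap between the two accuracies is the quantity the paper tracks across task difficulty; the "Lost in Translation" failures would show up as questions the native-language condition gets right but the English condition gets wrong.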

Country of Origin
🇩🇰 Denmark

Page Count
14 pages

Category
Computer Science:
Computation and Language