When Models Reason in Your Language: Controlling Thinking Trace Language Comes at the Cost of Accuracy
By: Jirui Qi, Shan Chen, Zidi Xiong, and more
Potential Business Impact:
AI models can be prompted or trained to show their reasoning in a user's own language, though this can reduce answer accuracy.
Recent Large Reasoning Models (LRMs) with thinking traces have shown strong performance on English reasoning tasks. However, their ability to think in other languages is less studied. This capability is as important as answer accuracy for real-world applications, because users can rely on the reasoning trace for oversight only when it is expressed in their own language. We comprehensively evaluate two leading families of LRMs on our XReasoning benchmark and find that even the most advanced models often revert to English or produce fragmented reasoning in other languages, revealing a substantial gap in multilingual reasoning. Prompt-based interventions that force models to reason in the user's language improve readability and oversight but reduce answer accuracy, exposing an important trade-off. We further show that targeted post-training on just 100 examples mitigates this mismatch, though some accuracy loss remains. Our results highlight the limited multilingual reasoning capabilities of current LRMs and outline directions for future work. Code and data are available at https://github.com/Betswish/mCoT-XReasoning.
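As a concrete illustration of the prompt-based intervention mentioned in the abstract, the sketch below wraps a question with an instruction pinning the thinking-trace language to the user's language. The function name, language table, and prompt wording are illustrative assumptions, not the exact prompts used in the paper; those are available in the linked repository.

```python
# Minimal sketch of a prompt-based language-forcing intervention.
# The instruction wording is illustrative, not the paper's exact prompt.

LANGUAGE_NAMES = {
    "de": "German",
    "ja": "Japanese",
    "zh": "Chinese",
}


def build_forced_language_prompt(question: str, lang_code: str) -> str:
    """Wrap a question with a (hypothetical) instruction that asks the
    model to produce its entire reasoning trace in the user's language."""
    language = LANGUAGE_NAMES.get(lang_code, "English")
    return (
        f"Please think step by step and write your entire reasoning "
        f"in {language}, then give the final answer.\n\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    # Example: force the reasoning trace into Japanese.
    print(build_forced_language_prompt("What is 17 * 24?", "ja"))
```

Per the paper's findings, an intervention like this improves readability and oversight for non-English users but tends to reduce answer accuracy relative to letting the model reason in English.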
Similar Papers
Beg to Differ: Understanding Reasoning-Answer Misalignment Across Languages
Computation and Language
Tests if AI reasons the same in all languages.
A Comprehensive Evaluation of Multilingual Chain-of-Thought Reasoning: Performance, Consistency, and Faithfulness Across Languages
Computation and Language
Helps computers think better in different languages.
Language Matters: How Do Multilingual Input and Reasoning Paths Affect Large Reasoning Models?
Computation and Language
Computers think in English, not your language.