Beg to Differ: Understanding Reasoning-Answer Misalignment Across Languages
By: Anaelia Ovalle, Candace Ross, Sebastian Ruder, and more
Potential Business Impact:
Tests whether AI reasons the same way in all languages.
Large language models demonstrate strong reasoning capabilities through chain-of-thought prompting, but whether this reasoning quality transfers across languages remains underexplored. We introduce a human-validated framework to evaluate whether model-generated reasoning traces logically support their conclusions across languages. Analyzing 65k reasoning traces from GlobalMMLU questions across 6 languages and 6 frontier models, we uncover a critical blind spot: while models achieve high task accuracy, their reasoning can fail to support their conclusions. Reasoning traces in non-Latin scripts show at least twice as much misalignment between reasoning and conclusion as those in Latin scripts. We develop an error taxonomy through human annotation to characterize these failures, finding that they stem primarily from evidential errors (unsupported claims, ambiguous facts), followed by illogical reasoning steps. Our findings demonstrate that current multilingual evaluation practices provide an incomplete picture of model reasoning capabilities and highlight the need for reasoning-aware evaluation frameworks.
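To make the headline comparison concrete, the sketch below shows one way to compute per-script misalignment rates from annotated reasoning traces, such as the non-Latin versus Latin gap the paper reports. It is an illustrative assumption, not the authors' released code: the record fields (`script`, `reasoning_supports_answer`) and the two-way script grouping are made up for the example.

```python
from collections import defaultdict

# Illustrative annotated traces: each record notes whether the trace's
# reasoning logically supports its final answer. Field names are
# assumptions, not the paper's actual schema.
traces = [
    {"language": "en", "script": "Latin",      "reasoning_supports_answer": True},
    {"language": "sw", "script": "Latin",      "reasoning_supports_answer": True},
    {"language": "hi", "script": "Devanagari", "reasoning_supports_answer": False},
    {"language": "zh", "script": "Han",        "reasoning_supports_answer": True},
    {"language": "ar", "script": "Arabic",     "reasoning_supports_answer": False},
]

def misalignment_rate(records):
    """Fraction of traces whose reasoning does not support the conclusion."""
    if not records:
        return 0.0
    failures = sum(not r["reasoning_supports_answer"] for r in records)
    return failures / len(records)

# Group traces by whether the language is written in a Latin script.
by_group = defaultdict(list)
for r in traces:
    group = "Latin" if r["script"] == "Latin" else "non-Latin"
    by_group[group].append(r)

rates = {group: misalignment_rate(records) for group, records in by_group.items()}
print(rates)

# With real annotations, this ratio is what the paper's "at least twice
# as much misalignment" claim refers to.
if rates.get("Latin"):
    print("non-Latin / Latin ratio:", rates["non-Latin"] / rates["Latin"])
```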
Similar Papers
When Models Reason in Your Language: Controlling Thinking Trace Language Comes at the Cost of Accuracy
Computation and Language
Computers can now explain their thinking in any language.
Why Do Multilingual Reasoning Gaps Emerge in Reasoning Language Models?
Computation and Language
Helps computers reason better in all languages.
The Reasoning Lingua Franca: A Double-Edged Sword for Multilingual AI
Computation and Language
Computers understand math better in English.