Score: 1

Beg to Differ: Understanding Reasoning-Answer Misalignment Across Languages

Published: December 27, 2025 | arXiv ID: 2512.22712v1

By: Anaelia Ovalle, Candace Ross, Sebastian Ruder, and more

BigTech Affiliations: Meta

Potential Business Impact:

Tests whether AI models reason consistently across languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models demonstrate strong reasoning capabilities through chain-of-thought prompting, but whether this reasoning quality transfers across languages remains underexplored. We introduce a human-validated framework to evaluate whether model-generated reasoning traces logically support their conclusions across languages. Analyzing 65k reasoning traces from GlobalMMLU questions across 6 languages and 6 frontier models, we uncover a critical blind spot: while models achieve high task accuracy, their reasoning can fail to support their conclusions. Reasoning traces in non-Latin scripts show at least twice as much misalignment between reasoning and conclusions as those in Latin scripts. We develop an error taxonomy through human annotation to characterize these failures, finding that they stem primarily from evidential errors (unsupported claims, ambiguous facts), followed by illogical reasoning steps. Our findings demonstrate that current multilingual evaluation practices provide an incomplete picture of model reasoning capabilities and highlight the need for reasoning-aware evaluation frameworks.
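To make the core measurement concrete, here is a minimal sketch of the kind of analysis the abstract describes: given reasoning traces labeled (by humans or a judge) as supporting or not supporting their final answer, compare misalignment rates between Latin and non-Latin scripts. The schema, field names, and toy data below are hypothetical illustrations, not the paper's actual pipeline.

```python
"""Sketch: per-script reasoning-answer misalignment rates.

Assumes each trace already carries a verdict on whether its reasoning
logically supports its conclusion (in the paper, a human-validated
judgment); all names here are illustrative, not the authors' schema.
"""
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Trace:
    language: str                     # e.g. "en", "hi"
    script: str                       # "latin" or "non_latin"
    answer_correct: bool              # final answer matches the gold label
    reasoning_supports_answer: bool   # human/judge verdict on the trace

def misalignment_rate(traces):
    """Fraction of traces whose reasoning fails to support the conclusion."""
    if not traces:
        return 0.0
    return sum(not t.reasoning_supports_answer for t in traces) / len(traces)

def by_script(traces):
    """Group traces by script and compute the misalignment rate per group."""
    groups = defaultdict(list)
    for t in traces:
        groups[t.script].append(t)
    return {script: misalignment_rate(ts) for script, ts in groups.items()}

# Toy data illustrating the paper's headline pattern: a model can answer
# correctly (high task accuracy) while its reasoning fails to support
# that answer, and this gap is wider for non-Latin scripts.
traces = [
    Trace("en", "latin", True, True),
    Trace("en", "latin", True, False),      # correct answer, broken reasoning
    Trace("hi", "non_latin", True, False),
    Trace("hi", "non_latin", True, False),
]
print(by_script(traces))  # {'latin': 0.5, 'non_latin': 1.0}
```

The point of separating `answer_correct` from `reasoning_supports_answer` is exactly the blind spot the paper identifies: accuracy-only evaluation would score all four toy traces as correct, while the alignment verdicts reveal that most of the reasoning does not hold up.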

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science: Computation and Language