Do Language Models Reason Across Languages?

Published: January 10, 2026 | arXiv ID: 2601.06644v1

By: Yan Meng, Wafaa Mohammed, Christof Monz

Potential Business Impact:

Enables question-answering systems to combine evidence from documents written in different languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Real-world information sources are inherently multilingual, which naturally raises the question of whether language models can synthesize information across languages. In this paper, we introduce a simple two-hop question-answering setting in which answering a question requires making inferences over two multilingual documents. We find that language models are more sensitive to language variation in answer-span documents than in documents providing bridging information, despite both documents being equally important for answering a question. Under a step-by-step sub-question evaluation, we further show that in up to 33% of multilingual cases, models fail to infer the bridging information in the first step yet still answer the overall question correctly. This indicates that reasoning in language models, especially in multilingual settings, does not follow a faithful step-by-step decomposition. We then show that the absence of reasoning decomposition leads to around 18% composition failures, where both sub-questions are answered correctly but the final two-hop question is not. To mitigate this, we propose a simple three-stage SUBQ prompting method that guides multi-step reasoning with sub-questions, boosting accuracy from 10.1% to 66.5%.
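The abstract only names the three stages of SUBQ prompting, so the sketch below is an illustrative reconstruction, not the authors' exact templates: decompose the two-hop question into a bridging sub-question, answer it against the bridging document, then answer the original question with that answer made explicit. `call_model` is a hypothetical stand-in for any chat-style LLM API; the prompt wording and stage boundaries are assumptions.

```python
# Hedged sketch of a three-stage sub-question prompting pipeline in the
# spirit of SUBQ. Prompts and helper names are illustrative assumptions.

def call_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's API."""
    raise NotImplementedError

def subq_answer(question: str, doc_bridge: str, doc_answer: str) -> str:
    # Stage 1: decompose the two-hop question into an explicit
    # sub-question targeting the bridging information.
    sub_q = call_model(
        f"Question: {question}\n"
        "Write the first-hop sub-question whose answer is needed "
        "before the main question can be answered."
    )

    # Stage 2: answer the sub-question against the document carrying
    # the bridging information (possibly in another language).
    bridge = call_model(
        f"Document: {doc_bridge}\n\nAnswer briefly: {sub_q}"
    )

    # Stage 3: answer the original question using the answer-span
    # document, with the bridging fact stated explicitly so the model
    # cannot skip the first reasoning hop.
    return call_model(
        f"Document: {doc_answer}\n"
        f"Known fact: {sub_q} -> {bridge}\n"
        f"Answer briefly: {question}"
    )
```

Making the bridging answer an explicit input to the final prompt is what targets the composition failures the paper reports: both hops are forced to happen, in order, rather than being left implicit.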

Page Count
17 pages

Category
Computer Science:
Computation and Language