Best-of-L: Cross-Lingual Reward Modeling for Mathematical Reasoning

Published: September 19, 2025 | arXiv ID: 2509.15811v1

By: Sara Rajaee, Rochelle Choenni, Ekaterina Shutova, et al.

Potential Business Impact:

Improves LLMs' mathematical reasoning in any language by ranking candidate answers across languages with a shared reward model.

Business Areas:
Language Learning Education

While the reasoning abilities of large language models (LLMs) continue to advance, it remains unclear how such ability varies across languages in multilingual LLMs and whether different languages produce reasoning paths that complement each other. To investigate this question, we train a reward model to rank generated responses for a given question across languages. Our results show that our cross-lingual reward model substantially improves mathematical reasoning performance compared to using reward modeling within a single language, benefiting even high-resource languages. While English often exhibits the highest performance in multilingual models, we find that cross-lingual sampling particularly benefits English under low sampling budgets. Our findings reveal new opportunities to improve multilingual reasoning by leveraging the complementary strengths of diverse languages.
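The selection procedure the abstract describes can be sketched as follows: generate candidate solutions in several languages, score every candidate with a single reward model, and keep the highest-scoring response regardless of its language. This is a minimal illustration, not the paper's implementation; the scoring function and all names below are placeholders (a real setup would use a trained cross-lingual reward model).

```python
# Sketch of cross-lingual best-of-L selection: pool candidates from all
# languages and rank them jointly with one reward model.

def reward_model_score(question: str, response: str) -> float:
    """Placeholder reward: prefers longer responses (illustrative only;
    a real system would call a trained reward model here)."""
    return float(len(response))

def best_of_l(question: str, candidates_by_lang: dict) -> tuple:
    """Rank all candidates from all languages jointly.

    Returns the (language, response) pair with the highest reward,
    so a non-English answer can win even for an English question.
    """
    return max(
        ((lang, resp)
         for lang, responses in candidates_by_lang.items()
         for resp in responses),
        key=lambda pair: reward_model_score(question, pair[1]),
    )

# Toy usage: two English candidates and one German candidate.
candidates = {
    "en": ["The answer is 42.", "42"],
    "de": ["Die Antwort lautet 42, da 6 * 7 = 42."],
}
lang, response = best_of_l("What is 6 * 7?", candidates)
```

Under this placeholder reward the German candidate wins, which mirrors the paper's point: pooling responses across languages lets the reward model exploit complementary reasoning paths rather than restricting selection to a single language.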

Country of Origin
🇳🇱 Netherlands

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Computation and Language