Best-of-L: Cross-Lingual Reward Modeling for Mathematical Reasoning
By: Sara Rajaee, Rochelle Choenni, Ekaterina Shutova, and more
Potential Business Impact:
Makes computers better at math in any language by picking the best answer across languages.
While the reasoning abilities of large language models (LLMs) continue to advance, it remains unclear how this ability varies across languages in multilingual LLMs and whether different languages produce reasoning paths that complement each other. To investigate this question, we train a reward model to rank generated responses for a given question across languages. Our results show that the cross-lingual reward model substantially improves mathematical reasoning performance compared to reward modeling within a single language, benefiting even high-resource languages. While English often exhibits the highest performance in multilingual models, we find that cross-lingual sampling particularly benefits English under low sampling budgets. Our findings reveal new opportunities to improve multilingual reasoning by leveraging the complementary strengths of diverse languages.
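To make the selection procedure concrete, here is a minimal sketch of cross-lingual best-of-N selection under the abstract's description: sample candidate answers in several languages, score every candidate with a single shared reward model, and return the top-scoring one. The language pool, the `generate` and `reward_score` functions, and the sampling budget are all hypothetical stand-ins, not the paper's actual implementation.

```python
# Minimal sketch of cross-lingual best-of-N ("best-of-L") selection.
# `generate` and `reward_score` are hypothetical placeholders; the paper's
# real generator, reward model, and language pool are not specified here.
from dataclasses import dataclass

LANGUAGES = ["en", "de", "sw", "zh"]  # example language pool (assumption)

@dataclass
class Candidate:
    language: str
    answer: str
    score: float

def generate(question: str, language: str, n: int) -> list[str]:
    """Stand-in for sampling n reasoning answers from a multilingual LLM
    prompted in `language`."""
    return [f"[{language} sample {i}] answer to: {question}" for i in range(n)]

def reward_score(question: str, answer: str) -> float:
    """Stand-in for the cross-lingual reward model, which assigns a scalar
    quality score to a (question, answer) pair."""
    return float(len(answer) % 7)  # placeholder scoring only

def best_of_l(question: str, per_language_budget: int) -> Candidate:
    """Pool candidates from every language, score them all with one shared
    reward model, and return the single highest-scoring answer."""
    pool = [
        Candidate(lang, ans, reward_score(question, ans))
        for lang in LANGUAGES
        for ans in generate(question, lang, per_language_budget)
    ]
    return max(pool, key=lambda c: c.score)

if __name__ == "__main__":
    best = best_of_l("What is 17 * 24?", per_language_budget=4)
    print(best.language, best.score)
```

The key design point the abstract highlights is that the candidate pool spans languages, so the reward model can surface a correct reasoning path from any language rather than only re-ranking within one; the monolingual baseline corresponds to restricting `LANGUAGES` to a single entry.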
Similar Papers
Could Thinking Multilingually Empower LLM Reasoning?
Computation and Language
Using many languages helps AI solve problems better.
The Reasoning Lingua Franca: A Double-Edged Sword for Multilingual AI
Computation and Language
Computers understand math better in English.
A Survey on Large Language Models for Mathematical Reasoning
Artificial Intelligence
Helps computers solve math problems like a person.