From Reasoning to Code: GRPO Optimization for Underrepresented Languages

Published: May 20, 2025 | arXiv ID: 2506.11027v2

By: Federico Pennino, Bianca Raimondi, Massimo Rondelli, and more

Potential Business Impact:

Teaches computers to write code in underrepresented programming languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Generating accurate and executable code with large language models (LLMs) is challenging for languages that have little public training data compared to popular languages such as Python. This paper introduces a generalizable approach that combines small code-specialized versions of the Qwen 2.5 model with Group Relative Policy Optimization (GRPO) to enable effective code generation through explicit reasoning steps, which is particularly beneficial for languages with small source-code corpora. Prolog serves as the representative use case because of its limited online presence; the initial model struggled to generate executable code at all. After a number of training steps, the model produces logically consistent and syntactically accurate code by integrating reasoning-driven feedback directly into the reinforcement learning loop. Experimental evaluations on mathematical logic problem benchmarks show significant improvements in reasoning quality, code accuracy, and logical correctness, underscoring the potential of this approach for the many programming languages that lack extensive training resources.
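The abstract gives no implementation details, but the core idea of feeding execution-based, correctness-oriented feedback into the RL loop can be illustrated. Below is a minimal sketch (not the authors' code) of a reward function that a GRPO-style trainer could call on a batch of generated Prolog completions: each program is written to a temporary file, consulted with SWI-Prolog, and scored on whether it loads and whether a query reproduces the expected answer. The `swipl` command-line usage, the `solve/1` entry-point predicate, and the `expected` ground-truth field are assumptions made for illustration.

```python
import os
import subprocess
import tempfile
from typing import List


def prolog_reward(completions: List[str], expected: List[str]) -> List[float]:
    """Score generated Prolog programs: partial credit for consulting
    without errors, full credit if the assumed solve(X) query prints the
    expected answer. Illustrative only; predicate and reward values are
    not taken from the paper."""
    rewards = []
    for code, answer in zip(completions, expected):
        with tempfile.NamedTemporaryFile("w", suffix=".pl", delete=False) as f:
            f.write(code)
            path = f.name
        try:
            # 1) Does the program even consult (load) cleanly?
            load = subprocess.run(
                ["swipl", "-q", "-g", "halt", path],
                capture_output=True, text=True, timeout=5,
            )
            if load.returncode != 0:
                rewards.append(0.0)  # syntax/consultation failure
                continue
            # 2) Run the assumed entry-point query and compare output.
            run = subprocess.run(
                ["swipl", "-q", "-g", "solve(X), write(X), halt", path],
                capture_output=True, text=True, timeout=5,
            )
            if run.returncode == 0 and run.stdout.strip() == answer.strip():
                rewards.append(1.0)  # executable and logically correct
            else:
                rewards.append(0.3)  # executable but wrong or failing query
        except subprocess.TimeoutExpired:
            rewards.append(0.0)      # treat non-terminating programs as failures
        finally:
            os.unlink(path)
    return rewards
```

In a GRPO setup, a reward like this would be computed for every sampled completion in a group, and each completion's advantage would be formed relative to the group's mean reward, so the policy is pushed toward completions that execute and solve the problem better than its own typical samples.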

Country of Origin
🇮🇹 Italy

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)