Improving LLM-Generated Code Quality with GRPO

Published: June 2, 2025 | arXiv ID: 2506.02211v1

By: Maxime Robeyns, Laurence Aitchison

Potential Business Impact:

Trains LLMs to produce code that is more maintainable, higher quality, and safer to use.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) are gaining widespread use for code generation. Recent training procedures use execution feedback as a reward signal, typically rewarding functional correctness via unit test pass rate. However, this signal fails to capture the maintainability, quality, and safety of the code produced. We address this under-explored area by developing a comprehensive library that quantifies various aspects of code quality and using it as a reward in GRPO. We find that GRPO increases code quality according to this measure, a result confirmed by expert, blinded human annotators.
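The abstract does not specify which quality checks the authors' library performs, but the idea of a static code-quality reward can be sketched with a few simple heuristics. The function below is a hypothetical, minimal stand-in (not the paper's library): it parses generated Python with the standard-library `ast` module and scores it in [0, 1] on three illustrative checks.

```python
import ast


def quality_reward(code: str) -> float:
    """Score a code string in [0, 1] using simple static checks.

    Illustrative sketch only: the paper's library quantifies many
    more aspects of maintainability, quality, and safety.
    """
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return 0.0  # unparseable code earns no reward

    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    checks = [
        # Check 1: every function has a docstring.
        all(ast.get_docstring(f) for f in funcs) if funcs else True,
        # Check 2: no line exceeds 99 characters.
        all(len(line) <= 99 for line in code.splitlines()),
        # Check 3: no bare `except:` clauses (a common safety smell).
        not any(
            isinstance(n, ast.ExceptHandler) and n.type is None
            for n in ast.walk(tree)
        ),
    ]
    return sum(checks) / len(checks)


good = 'def add(a, b):\n    """Return a + b."""\n    return a + b\n'
bad = "def add(a, b):\n    try:\n        return a + b\n    except:\n        pass\n"
print(quality_reward(good))
print(quality_reward(bad))
```

In a GRPO setup, a scalar like this would be combined with (or substituted for) the unit-test pass rate when computing the advantage for each sampled completion.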

Country of Origin
🇬🇧 United Kingdom

Page Count
15 pages

Category
Computer Science:
Artificial Intelligence