Score: 1

Alignment with Fill-In-the-Middle for Enhancing Code Generation

Published: August 27, 2025 | arXiv ID: 2508.19532v1

By: Houxing Ren, Zimu Lu, Weikang Shi, and more

Potential Business Impact:

Improves how reliably AI models write working computer code.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The code generation capabilities of Large Language Models (LLMs) have advanced applications such as tool invocation and problem-solving. However, improving performance on code-related tasks remains challenging because training data that can be verified with accurate test cases is limited. While Direct Preference Optimization (DPO) has shown promise, existing methods for generating test cases still face limitations. In this paper, we propose a novel approach that splits code snippets into smaller, granular blocks, creating more diverse DPO pairs from the same test cases. Additionally, we introduce an Abstract Syntax Tree (AST)-based splitting strategy and a curriculum training scheme to enhance DPO training. Our approach demonstrates significant improvements on code generation tasks, as validated by experiments on benchmark datasets such as HumanEval(+), MBPP(+), APPS, LiveCodeBench, and BigCodeBench. Code and data are available at https://github.com/SenseLLM/StructureCoder.
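The abstract only outlines the approach, but the core idea, carving a solution into AST-aligned blocks so that each block can serve as the "middle" of a fill-in-the-middle example, can be illustrated with a short sketch. The snippet below is a minimal, assumed reconstruction, not the paper's implementation: the function names (`ast_block_spans`, `fim_splits`), the choice of node types, and the block granularity are illustrative assumptions. It shows how one might enumerate block-level spans with Python's `ast` module and turn each into a (prefix, middle, suffix) split; sampling model completions for the middle and filtering them with the original test cases would then yield chosen/rejected DPO pairs.

```python
import ast


def ast_block_spans(source: str):
    """Collect line spans of compound-statement blocks (if/for/while/with/try).
    These spans are candidate "middle" segments; the exact granularity used by
    StructureCoder is an assumption here, not taken from the paper."""
    tree = ast.parse(source)
    spans = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.With, ast.Try)):
            spans.append((node.lineno, node.end_lineno))
    return sorted(spans)


def fim_splits(source: str):
    """Turn each block span into a (prefix, middle, suffix) fill-in-the-middle
    example. Each example shares the same surrounding code but masks a
    different block, giving several DPO candidates per original snippet."""
    lines = source.splitlines(keepends=True)
    examples = []
    for start, end in ast_block_spans(source):
        examples.append({
            "prefix": "".join(lines[: start - 1]),
            "middle": "".join(lines[start - 1: end]),
            "suffix": "".join(lines[end:]),
        })
    return examples


if __name__ == "__main__":
    snippet = (
        "def count_evens(xs):\n"
        "    total = 0\n"
        "    for x in xs:\n"
        "        if x % 2 == 0:\n"
        "            total += 1\n"
        "    return total\n"
    )
    for ex in fim_splits(snippet):
        print("--- middle block ---")
        print(ex["middle"])
```

On this toy snippet the sketch produces two splits, one masking the `for` loop and one masking the inner `if`, which is the sense in which finer-grained splitting yields more diverse preference pairs from a single test-verified solution.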

Repos / Data Links
https://github.com/SenseLLM/StructureCoder

Page Count
17 pages

Category
Computer Science:
Computation and Language