Learning to Align Human Code Preferences
By: Xin Yin, Chao Ni, Liushan Chen, and more
Potential Business Impact:
Teaches AI to write code that matches human preferences.
Large Language Models (LLMs) have demonstrated remarkable potential in automating software development tasks. While recent advances leverage Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) to align models with human preferences, the optimal training strategy remains unclear across diverse code preference scenarios. This paper systematically investigates the roles of SFT and DPO in aligning LLMs with different code preferences. Through both theoretical analysis and empirical observation, we hypothesize that SFT excels in scenarios with objectively verifiable optimal solutions, while applying SFT followed by DPO (S&D) enables models to explore superior solutions in scenarios without objectively verifiable optimal solutions. Based on the analysis and experimental evidence, we propose Adaptive Preference Optimization (APO), a dynamic integration approach that adaptively amplifies preferred responses, suppresses dispreferred ones, and encourages exploration of potentially superior solutions during training. Extensive experiments across six representative code preference tasks validate our theoretical hypotheses and demonstrate that APO consistently matches or surpasses the performance of existing SFT and S&D strategies. Our work provides both theoretical foundations and practical guidance for selecting appropriate training strategies in different code preference alignment scenarios.
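The abstract does not spell out APO's objective, so the snippet below is only a minimal PyTorch sketch of the ingredients it names: a standard SFT term that amplifies preferred responses, a standard DPO term that suppresses dispreferred ones relative to a frozen reference model, and a hypothetical weight `alpha` standing in for the paper's adaptive integration. The function names, the blending scheme, and the sample numbers are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sft_loss(logp_chosen):
    # Standard SFT: maximize the log-likelihood of the preferred response.
    return -logp_chosen.mean()

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Standard DPO: widen the margin between preferred and dispreferred
    # responses relative to a frozen reference model.
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

def apo_style_loss(logp_chosen, logp_rejected,
                   ref_logp_chosen, ref_logp_rejected,
                   alpha=0.5, beta=0.1):
    # Hypothetical blend (not the paper's exact formulation): the SFT term
    # reinforces preferred code, the DPO term pushes down dispreferred code;
    # in APO, the balance would be adapted during training per scenario.
    return alpha * sft_loss(logp_chosen) + (1 - alpha) * dpo_loss(
        logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta)

# Usage with made-up sequence-level log-probabilities for a batch of two.
logp_c = torch.tensor([-12.3, -10.1])   # policy log p(y_preferred | x)
logp_r = torch.tensor([-11.8, -13.4])   # policy log p(y_dispreferred | x)
ref_c = torch.tensor([-12.5, -10.4])    # reference log p(y_preferred | x)
ref_r = torch.tensor([-11.5, -12.9])    # reference log p(y_dispreferred | x)
print(apo_style_loss(logp_c, logp_r, ref_c, ref_r).item())
```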
Similar Papers
Improving LLM Safety and Helpfulness using SFT and DPO: A Study on OPT-350M
Computation and Language
Makes AI safer and more helpful.
DPO-F+: Aligning Code Repair Feedback with Developers' Preferences
Software Engineering
Helps computers explain code fixes to people.
Beyond Single: A Data Selection Principle for LLM Alignment via Fine-Grained Preference Signals
Machine Learning (CS)
Teaches AI to follow many different rules better.