Enhancing Small LLM Alignment through Margin-Based Objective Modifications under Resource Constraints
By: Daren Yao, Jinsong Yuan, Ruike Chen
Potential Business Impact:
Makes small AI understand what people want better.
Small large language models (LLMs) often struggle to align their outputs with human preferences, particularly when a large capability gap separates them from stronger models. In this work, we propose two lightweight DPO-based variants -- Adaptive Margin-Sigmoid Loss and APO-hinge-zero -- that introduce margin-based objectives and selective update mechanisms to better handle these underperformance scenarios. Our APO-hinge-zero method, which combines hinge-induced hard-example mining with the chosen-focused optimization of APO-zero, achieves strong results. In AlpacaEval, APO-hinge-zero improves the win rate by +2.0 points and the length-controlled win rate by +1.4 points over the APO-zero baseline. In MT-Bench, our methods maintain competitive performance across diverse categories, particularly excelling in STEM and Humanities tasks. These results demonstrate that simple modifications to preference-based objectives can substantially enhance small-LLM alignment under resource constraints, offering a practical path toward more efficient deployment.
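To make the ideas above concrete, the following is a minimal sketch of what a margin-based sigmoid loss and a hinge-gated, APO-zero-style loss could look like. The abstract does not give the exact formulations, so the function names (adaptive_margin_sigmoid_loss, apo_hinge_zero_loss), the parameters (beta, margin), and the specific functional forms below are illustrative assumptions rather than the paper's definitive objectives.

```python
# Hedged sketch of margin-based preference losses; the exact objectives in the
# paper are not given here, so the forms below are assumptions for illustration.
import torch
import torch.nn.functional as F

def adaptive_margin_sigmoid_loss(logp_chosen, logp_rejected,
                                 ref_logp_chosen, ref_logp_rejected,
                                 beta=0.1, margin=0.5):
    """DPO-style sigmoid loss with an additive margin (assumed form).

    logp_* are summed token log-probs from the policy model; ref_logp_* are
    the frozen reference model's log-probs for the same sequences.
    """
    # Implicit reward gap between the chosen and rejected responses.
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    gap = beta * (chosen_ratio - rejected_ratio)
    # Require the gap to exceed `margin` before the loss saturates.
    return -F.logsigmoid(gap - margin).mean()

def apo_hinge_zero_loss(logp_chosen, logp_rejected,
                        ref_logp_chosen, ref_logp_rejected,
                        beta=0.1, margin=1.0):
    """Hinge gate on the preference gap combined with APO-zero-style
    chosen/rejected terms (assumed combination): pairs whose gap already
    clears the margin contribute no gradient, focusing updates on hard pairs.
    """
    chosen_ratio = beta * (logp_chosen - ref_logp_chosen)
    rejected_ratio = beta * (logp_rejected - ref_logp_rejected)
    gap = chosen_ratio - rejected_ratio
    # Hinge-induced hard-example mining: select only margin-violating pairs.
    hard_mask = (gap < margin).float()
    # APO-zero-style terms: push chosen likelihood up and rejected likelihood
    # down relative to the reference model.
    per_pair = (1 - torch.sigmoid(chosen_ratio)) + torch.sigmoid(rejected_ratio)
    return (hard_mask * per_pair).sum() / hard_mask.sum().clamp(min=1.0)
```

In this reading, the hinge mask implements the selective update mechanism (only hard pairs receive gradient), while the two sigmoid terms reflect APO-zero's chosen-focused optimization; the actual losses used in the paper may differ in detail.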
Similar Papers
AMaPO: Adaptive Margin-attached Preference Optimization for Language Model Alignment
Computation and Language
Teaches AI to learn better from ranked choices.
Robust Preference Optimization via Dynamic Target Margins
Computation and Language
Makes AI smarter and safer by fixing bad training data.
Beyond Single: A Data Selection Principle for LLM Alignment via Fine-Grained Preference Signals
Machine Learning (CS)
Teaches AI to follow many different rules better.