Enhancing Small LLM Alignment through Margin-Based Objective Modifications under Resource Constraints

Published: August 11, 2025 | arXiv ID: 2508.08466v1

By: Daren Yao, Jinsong Yuan, Ruike Chen

Potential Business Impact:

Helps small AI models better understand what people want.

Small large language models (LLMs) often struggle to align their outputs with human preferences, particularly when they operate at a severe performance gap relative to larger models. In this work, we propose two lightweight DPO-based variants -- Adaptive Margin-Sigmoid Loss and APO-hinge-zero -- that address these underperformance scenarios by introducing margin-based objectives and selective update mechanisms. Our APO-hinge-zero method, which combines hinge-induced hard-example mining with the chosen-focused optimization of APO-zero, achieves strong results: on AlpacaEval, it improves the win rate by 2.0 points and the length-controlled win rate by 1.4 points over the APO-zero baseline. On MT-Bench, our methods remain competitive across diverse categories, excelling in particular on STEM and Humanities tasks. These results demonstrate that simple modifications to preference-based objectives can significantly enhance small-LLM alignment under resource constraints, offering a practical path toward more efficient deployment.
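Since this summary describes the two losses only at a high level, the sketch below illustrates one plausible reading in PyTorch: a sigmoid (DPO-style) preference loss with a per-pair adaptive margin, and an APO-zero term (following the published APO-zero objective, which pushes chosen log-ratios up and rejected log-ratios down) gated by a hinge mask so that only hard pairs receive gradient. The function names, the margin-adaptation rule, and the masking scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def adaptive_margin_sigmoid_loss(policy_chosen_logps, policy_rejected_logps,
                                 ref_chosen_logps, ref_rejected_logps,
                                 beta=0.1, base_margin=0.5):
    """Sigmoid (DPO-style) preference loss with a per-pair adaptive margin.

    The adaptation rule is a hypothetical choice: pairs the policy currently
    ranks poorly receive a larger margin, and hence more optimization pressure.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Detach so the margin acts as a target, not a gradient path.
    margin = base_margin * torch.sigmoid(rejected_rewards - chosen_rewards).detach()
    return -F.logsigmoid(chosen_rewards - rejected_rewards - margin).mean()

def apo_hinge_zero_loss(policy_chosen_logps, policy_rejected_logps,
                        ref_chosen_logps, ref_rejected_logps,
                        beta=0.1, margin=1.0):
    """APO-zero objective restricted to hinge-selected hard pairs."""
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # APO-zero: increase chosen log-ratios and decrease rejected log-ratios
    # independently (chosen-focused anchoring).
    apo_zero = (1 - torch.sigmoid(beta * chosen_logratios)
                + torch.sigmoid(beta * rejected_logratios))
    # Hinge-induced hard-example mining (selective update): pairs whose
    # implicit reward margin already exceeds `margin` contribute no gradient.
    reward_margin = beta * (chosen_logratios - rejected_logratios)
    hard_mask = (reward_margin < margin).float()
    return (hard_mask * apo_zero).sum() / hard_mask.sum().clamp(min=1.0)

# Toy usage on random per-sequence log-probabilities (batch of 8 pairs).
b = 8
pc, pr = torch.randn(b) - 1.0, torch.randn(b) - 1.5
rc, rr = torch.randn(b) - 1.0, torch.randn(b) - 1.0
print(adaptive_margin_sigmoid_loss(pc, pr, rc, rr).item())
print(apo_hinge_zero_loss(pc, pr, rc, rr).item())
```

The detached mask is what makes the update selective: well-separated pairs drop out of the loss entirely, concentrating gradient signal on the hard examples the summary attributes to the hinge mechanism.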

Country of Origin
🇺🇸 United States

Page Count
10 pages

Category
Computer Science: Computation and Language