Sem-DPO: Mitigating Semantic Inconsistency in Preference Optimization for Prompt Engineering

Published: July 27, 2025 | arXiv ID: 2507.20133v1

By: Anas Mohamed, Azal Ahmad Khan, Xinran Wang, and more

Potential Business Impact:

Makes AI art match your exact words.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Generative AI can now synthesize strikingly realistic images from text, yet output quality remains highly sensitive to how prompts are phrased. Direct Preference Optimization (DPO) offers a lightweight, off-policy alternative to RL for automatic prompt engineering, but its token-level regularization leaves semantic inconsistency unchecked: prompts that win higher preference scores can still drift away from the user's intended meaning. We introduce Sem-DPO, a variant of DPO that preserves semantic consistency while retaining DPO's simplicity and efficiency. Sem-DPO scales the DPO loss by an exponential weight proportional to the cosine distance between the original prompt and the winning candidate in embedding space, softly down-weighting training signals that would otherwise reward semantically mismatched prompts. We provide the first analytical bound on semantic drift for preference-tuned prompt generators, showing that Sem-DPO keeps learned prompts within a provably bounded neighborhood of the original text. On three standard text-to-image prompt-optimization benchmarks and two language models, Sem-DPO achieves 8-12% higher CLIP similarity and 5-9% higher human-preference scores (HPSv2.1, PickScore) than DPO, while also outperforming state-of-the-art baselines. These findings suggest that strong flat baselines augmented with semantic weighting should become the new standard for prompt-optimization studies and lay the groundwork for broader, semantics-aware preference optimization in language models.
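
The weighting mechanism described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `sem_dpo_loss`, the temperature `alpha`, and the exact form `exp(-alpha * distance)` are hypothetical choices; the abstract only states that the DPO loss is scaled by an exponential weight tied to the cosine distance between the original prompt and the winning candidate in embedding space.

```python
import torch
import torch.nn.functional as F

def sem_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps,
                 prompt_emb, chosen_emb,
                 beta=0.1, alpha=1.0):
    """Sketch of a semantically weighted DPO loss (assumed parameterization).

    policy_*/ref_* are per-example log-probabilities of the chosen and
    rejected prompts under the policy and reference models; prompt_emb and
    chosen_emb are embeddings of the original prompt and the winning
    candidate. alpha is a hypothetical temperature for the semantic weight.
    """
    # Standard DPO logits: difference of policy and reference log-ratios.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    dpo_loss = -F.logsigmoid(beta * (pi_logratios - ref_logratios))

    # Cosine distance between the original prompt and the winning candidate.
    cos_dist = 1.0 - F.cosine_similarity(prompt_emb, chosen_emb, dim=-1)

    # Exponential down-weighting: semantically mismatched winners contribute
    # less to the gradient; a perfectly aligned winner keeps weight 1.
    weight = torch.exp(-alpha * cos_dist)

    return (weight * dpo_loss).mean()
```

Note that when the winning candidate is semantically identical to the original prompt the weight is 1 and the update reduces to plain DPO, so the scheme only attenuates training signals that would reward semantic drift.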

Country of Origin
🇺🇸 🇵🇰 United States, Pakistan

Page Count
13 pages

Category
Computer Science:
Computation and Language