Sem-DPO: Mitigating Semantic Inconsistency in Preference Optimization for Prompt Engineering
By: Anas Mohamed, Azal Ahmad Khan, Xinran Wang, and more
Potential Business Impact:
Makes AI art match your exact words.
Generative AI can now synthesize strikingly realistic images from text, yet output quality remains highly sensitive to how prompts are phrased. Direct Preference Optimization (DPO) offers a lightweight, off-policy alternative to RL for automatic prompt engineering, but its token-level regularization leaves semantic inconsistency unchecked: prompts that win higher preference scores can still drift away from the user's intended meaning. We introduce Sem-DPO, a variant of DPO that preserves semantic consistency while retaining DPO's simplicity and efficiency. Sem-DPO scales the DPO loss by an exponential weight based on the cosine distance between the original prompt and the winning candidate in embedding space, softly down-weighting training signals that would otherwise reward semantically mismatched prompts. We provide the first analytical bound on semantic drift for preference-tuned prompt generators, showing that Sem-DPO keeps learned prompts within a provably bounded neighborhood of the original text. On three standard text-to-image prompt-optimization benchmarks and two language models, Sem-DPO achieves 8-12% higher CLIP similarity and 5-9% higher human-preference scores (HPSv2.1, PickScore) than DPO, while also outperforming state-of-the-art baselines. These findings suggest that strong flat baselines augmented with semantic weighting should become the new standard for prompt-optimization studies, and they lay the groundwork for broader, semantics-aware preference optimization in language models.
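The weighting idea in the abstract can be sketched in a few lines: compute the standard DPO loss on a preference pair, then scale it by an exponential factor that shrinks as the winning prompt's embedding drifts from the original prompt's. This is a minimal illustration, not the paper's implementation; the decay rate `alpha`, the scalar log-ratio inputs, and the raw NumPy embeddings are all assumptions for the sketch.

```python
import numpy as np

def dpo_loss(policy_logratio, ref_logratio, beta=0.1):
    # Standard DPO objective on one preference pair:
    # -log sigmoid(beta * ((log pi(w) - log pi(l)) - (log ref(w) - log ref(l))))
    # Here policy_logratio and ref_logratio are those two log-ratio terms.
    margin = beta * (policy_logratio - ref_logratio)
    return -np.log(1.0 / (1.0 + np.exp(-margin)))

def sem_dpo_loss(policy_logratio, ref_logratio, emb_orig, emb_win,
                 beta=0.1, alpha=5.0):
    # Cosine distance between the original prompt and the winning candidate
    # in embedding space (embeddings assumed to come from any text encoder).
    cos_sim = emb_orig @ emb_win / (
        np.linalg.norm(emb_orig) * np.linalg.norm(emb_win))
    dist = 1.0 - cos_sim
    # Exponential down-weight: alpha (assumed hyperparameter) controls how
    # sharply semantically drifted winners are suppressed.
    weight = np.exp(-alpha * dist)
    return weight * dpo_loss(policy_logratio, ref_logratio, beta)
```

When the winning candidate's embedding matches the original prompt, the weight is 1 and Sem-DPO reduces to plain DPO; as the cosine distance grows, the training signal from that pair decays toward zero instead of being hard-filtered.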
Similar Papers
DSPO: Direct Semantic Preference Optimization for Real-World Image Super-Resolution
CV and Pattern Recognition
Makes blurry pictures clear, just how you like them.
Ambiguity Awareness Optimization: Towards Semantic Disambiguation for Direct Preference Optimization
Computation and Language
Makes AI understand instructions better by ignoring confusing parts.