SemPA: Improving Sentence Embeddings of Large Language Models through Semantic Preference Alignment
By: Ziyang Chen, Zhenxuan Huang, Yile Wang, and more
Potential Business Impact:
Makes AI understand sentences better without losing its voice.
Traditional sentence embedding methods employ token-level contrastive learning on non-generative pre-trained models. Recently, embedding methods based on generative large language models (LLMs) have emerged. These methods either rely on fixed prompt templates or modify the model architecture: the former leaves the model unoptimized and yields limited performance, while the latter alters the model's internal computational mechanisms and thereby compromises its generative capabilities. We propose SemPA, a novel approach that improves sentence representations while preserving the generative ability of LLMs via semantic preference alignment. We leverage sentence-level Direct Preference Optimization (DPO) to efficiently optimize LLMs on a paraphrase generation task, where the model learns to discriminate semantically equivalent sentences while retaining its inherent generative capacity. Theoretically, we establish a formal connection between DPO and contrastive learning under the Plackett-Luce model framework. Empirically, results on both semantic textual similarity tasks and a range of LLM benchmarks show that SemPA achieves better semantic representations without sacrificing the inherent generation capability of LLMs.
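To make the claimed DPO-to-contrastive-learning connection concrete, the following is a minimal sketch using the standard objectives from the DPO literature, with the usual notation assumed here for illustration ($\pi_\theta$ the policy being optimized, $\pi_{\mathrm{ref}}$ the frozen reference model, $\beta$ a scaling coefficient); SemPA's exact sentence-level formulation may differ. For a prompt $x$ (e.g., a paraphrase-generation instruction over a source sentence), a preferred paraphrase $y^{+}$, and a dispreferred candidate $y^{-}$, the standard pairwise DPO loss is

\[
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\, y^{+},\, y^{-})}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y^{+}\mid x)}{\pi_{\mathrm{ref}}(y^{+}\mid x)} - \beta \log \frac{\pi_\theta(y^{-}\mid x)}{\pi_{\mathrm{ref}}(y^{-}\mid x)}\right)\right].
\]

Under the Plackett-Luce model with $K$ candidates $\{y_1, \dots, y_K\}$, of which only the top-ranked candidate $y_1$ is specified as preferred, the same derivation yields a softmax over the implicit rewards $r_\theta(x, y_k) = \beta \log \frac{\pi_\theta(y_k \mid x)}{\pi_{\mathrm{ref}}(y_k \mid x)}$:

\[
\mathcal{L}_{\mathrm{PL}} = -\,\mathbb{E}\!\left[\log \frac{\exp\!\big(r_\theta(x, y_1)\big)}{\sum_{k=1}^{K} \exp\!\big(r_\theta(x, y_k)\big)}\right],
\]

which has the same form as an InfoNCE-style contrastive loss with one positive and $K-1$ negatives, suggesting why preference alignment on paraphrases can act as sentence-level contrastive training.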
Similar Papers
Improving LLMs for Machine Translation Using Synthetic Preference Data
Computation and Language
Makes computer translations much better and more accurate.
Sem-DPO: Mitigating Semantic Inconsistency in Preference Optimization for Prompt Engineering
Computation and Language
Makes AI art match your exact words.
GEM: Generative Entropy-Guided Preference Modeling for Few-shot Alignment of LLMs
Artificial Intelligence
Teaches AI to learn from expert opinions.