Negative Matters: Multi-Granularity Hard-Negative Synthesis and Anchor-Token-Aware Pooling for Enhanced Text Embeddings
By: Tengyu Pan, Zhichao Duan, Zhenyu Li, and more
Potential Business Impact:
Helps computers capture the meaning of text more accurately.
Text embedding models are essential for various natural language processing tasks, encoding semantic information into dense vector representations. These models are typically optimized with contrastive learning over (query, positive, negative) triplets, where the negative samples play a critical role in sharpening the model's ability to discern subtle semantic distinctions. In this work, we introduce a Multi-Granularity Hard-negative (MGH) synthesis framework that leverages large language models (LLMs) to generate diverse negative samples with varying levels of similarity to the query. This enables a coarse-to-fine curriculum learning strategy during supervised training, allowing the embedding model to progressively learn more nuanced semantic representations. In addition, we propose an Anchor Token Aware (ATA) pooling method that assigns higher weights to anchor tokens based on aggregation patterns observed in LLMs, improving text embedding accuracy without increasing model complexity. Comprehensive experiments on the MTEB benchmark demonstrate that our methods achieve state-of-the-art performance, surpassing existing synthesis strategies both when training on synthetic data alone and when combining it with public retrieval datasets.
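The abstract combines two mechanisms: weighting tokens by anchor scores before pooling them into a single embedding, and contrasting each query against its positive and a set of hard negatives of graded similarity. The sketch below illustrates both ideas in generic PyTorch; the function names (ata_pool, contrastive_loss), the softmax over anchor scores, and the InfoNCE-style loss are assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def ata_pool(hidden_states, attention_mask, anchor_scores):
    """Anchor-token-aware pooling (illustrative sketch).

    hidden_states:  (batch, seq_len, dim) token embeddings from the LLM
    attention_mask: (batch, seq_len) 1 for real tokens, 0 for padding
    anchor_scores:  (batch, seq_len) per-token anchor weights, e.g. derived
                    from the model's observed aggregation patterns
    """
    # Mask out padding before normalizing the anchor weights.
    scores = anchor_scores.masked_fill(attention_mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)          # (batch, seq_len)
    # Weighted sum of token states -> one embedding per sequence.
    return torch.einsum("bs,bsd->bd", weights, hidden_states)

def contrastive_loss(q, pos, negs, temperature=0.05):
    """InfoNCE-style loss over (query, positive, hard negatives) triplets.

    q:    (batch, dim) query embeddings
    pos:  (batch, dim) positive embeddings
    negs: (batch, k, dim) k hard negatives per query; under a coarse-to-fine
          curriculum these would grow more similar to the query over training
    """
    q, pos, negs = (F.normalize(t, dim=-1) for t in (q, pos, negs))
    pos_sim = (q * pos).sum(-1, keepdim=True)             # (batch, 1)
    neg_sim = torch.einsum("bd,bkd->bk", q, negs)         # (batch, k)
    logits = torch.cat([pos_sim, neg_sim], dim=-1) / temperature
    # The positive sits at index 0 of each row of logits.
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)

# Quick check with random tensors (batch=2, seq=8, dim=16, k=4 negatives):
h = torch.randn(2, 8, 16)
mask = torch.ones(2, 8, dtype=torch.long)
scores = torch.randn(2, 8)
q = ata_pool(h, mask, scores)
loss = contrastive_loss(q, torch.randn(2, 16), torch.randn(2, 4, 16))
```

Under the coarse-to-fine curriculum the abstract describes, training would start with low-similarity (coarse) negatives and progressively shift toward high-similarity (fine) ones, so the model first learns broad semantic separation before refining subtle distinctions.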
Similar Papers
Improve Multi-Modal Embedding Learning via Explicit Hard Negative Gradient Amplifying
CV and Pattern Recognition
Teaches computers to better understand pictures and words.
LGAI-EMBEDDING-Preview Technical Report
Computation and Language
Helps computers understand text for many jobs.
PGMEL: Policy Gradient-based Generative Adversarial Network for Multimodal Entity Linking
Computation and Language
Helps computers understand pictures and words together.