Learn Before Represent: Bridging Generative and Contrastive Learning for Domain-Specific LLM Embeddings

Published: January 16, 2026 | arXiv ID: 2601.11124v1

By: Xiaoyu Liang, Yuchen Peng, Jiale Luo, and more

Potential Business Impact:

Teaches language models specialized domain knowledge (e.g., medicine, chemistry, code) so they represent and retrieve technical content more accurately.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) adapted via contrastive learning excel in general representation learning but struggle in vertical domains like chemistry and law, primarily due to a lack of domain-specific knowledge. This work identifies a core bottleneck: the prevailing "LLM+CL" paradigm focuses on semantic alignment but cannot perform knowledge acquisition, leading to failures on specialized terminology. To bridge this gap, we propose Learn Before Represent (LBR), a novel two-stage framework. LBR first injects domain knowledge via an Information Bottleneck-Constrained Generative Learning stage, preserving the LLM's causal attention to maximize knowledge acquisition while compressing semantics. It then performs Generative-Refined Contrastive Learning on the compressed representations for alignment. This approach maintains architectural consistency and resolves the objective conflict between generative and contrastive learning. Extensive experiments on medical, chemistry, and code retrieval tasks show that LBR significantly outperforms strong baselines. Our work establishes a new paradigm for building accurate and robust representations in vertical domains.
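
The abstract describes a two-stage recipe: a generative (causal language modeling) stage that compresses representations through a bottleneck, followed by a contrastive stage on those compressed embeddings. The sketch below is a minimal, toy-scale illustration of that flow in plain PyTorch; the model sizes, the linear bottleneck head, the quadratic penalty standing in for the information-bottleneck term, and the loss weighting are all assumptions for illustration, not the authors' implementation.

```python
# Conceptual two-stage "learn before represent" sketch (toy scale, plain PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, BOTTLENECK = 1000, 64, 32

class ToyCausalLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.compress = nn.Linear(DIM, BOTTLENECK)   # bottleneck projection
        self.lm_head = nn.Linear(BOTTLENECK, VOCAB)  # next-token prediction from compressed state

    def forward(self, ids):
        T = ids.size(1)
        # Causal mask so the generative stage keeps the LLM's left-to-right attention.
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.encoder(self.embed(ids), mask=mask)
        z = self.compress(h)                          # compressed per-token representation
        return z, self.lm_head(z)

model = ToyCausalLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

# ----- Stage 1: generative learning on domain text, with a bottleneck-style penalty -----
ids = torch.randint(0, VOCAB, (8, 16))                # toy batch of domain-corpus token ids
z, logits = model(ids[:, :-1])
gen_loss = F.cross_entropy(logits.reshape(-1, VOCAB), ids[:, 1:].reshape(-1))
ib_penalty = z.pow(2).mean()                          # stand-in for an information-bottleneck regularizer
(gen_loss + 0.01 * ib_penalty).backward()
opt.step(); opt.zero_grad()

# ----- Stage 2: contrastive alignment on the compressed representations -----
q_ids = torch.randint(0, VOCAB, (8, 16))              # queries
d_ids = torch.randint(0, VOCAB, (8, 16))              # matched documents (positives)
q = F.normalize(model(q_ids)[0][:, -1], dim=-1)       # last-token compressed state as the embedding
d = F.normalize(model(d_ids)[0][:, -1], dim=-1)
sims = q @ d.T / 0.05                                  # in-batch negatives, temperature 0.05
cl_loss = F.cross_entropy(sims, torch.arange(8))
cl_loss.backward()
opt.step()
```

The point of the sketch is the ordering: the generative loss is applied first so domain knowledge is acquired under causal attention, and only then is the contrastive (InfoNCE-style) objective applied to the already-compressed embeddings, which is the separation the paper argues resolves the conflict between the two objectives.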

Country of Origin
🇨🇳 China

Page Count
10 pages

Category
Computer Science: Information Retrieval