Learn Before Represent: Bridging Generative and Contrastive Learning for Domain-Specific LLM Embeddings
By: Xiaoyu Liang, Yuchen Peng, Jiale Luo, and more
Potential Business Impact:
Teaches computers specialized knowledge for better answers.
Large Language Models (LLMs) adapted via contrastive learning excel in general representation learning but struggle in vertical domains like chemistry and law, primarily due to a lack of domain-specific knowledge. This work identifies a core bottleneck: the prevailing "LLM+CL" paradigm focuses on semantic alignment but cannot perform knowledge acquisition, leading to failures on specialized terminology. To bridge this gap, we propose Learn Before Represent (LBR), a novel two-stage framework. LBR first injects domain knowledge via an Information Bottleneck-Constrained Generative Learning stage, preserving the LLM's causal attention to maximize knowledge acquisition while compressing semantics. It then performs Generative-Refined Contrastive Learning on the compressed representations for alignment. This approach maintains architectural consistency and resolves the objective conflict between generative and contrastive learning. Extensive experiments on medical, chemistry, and code retrieval tasks show that LBR significantly outperforms strong baselines. Our work establishes a new paradigm for building accurate and robust representations in vertical domains.
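The abstract describes the two training stages but gives no implementation details. The sketch below is only one way such a pipeline could be wired up in PyTorch: the BottleneckHead module, the L2-style compression penalty standing in for the information-bottleneck constraint, the in-batch InfoNCE loss, and every name and hyperparameter are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a two-stage "learn before represent" recipe.
# All module names and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BottleneckHead(nn.Module):
    """Compresses a causal LM's hidden states into a low-dimensional embedding."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, bottleneck_dim)

    def forward(self, last_hidden: torch.Tensor) -> torch.Tensor:
        # Pool the final token; under causal attention it sees the whole sequence.
        pooled = last_hidden[:, -1, :]
        return self.proj(pooled)


def stage1_generative_loss(lm_logits, labels, z, beta: float = 1e-3):
    """Stage 1 (assumed form): next-token prediction plus a compression penalty.

    The squared-norm term on the compressed code z is a simple stand-in for an
    information-bottleneck constraint; the paper's exact objective may differ.
    """
    ce = F.cross_entropy(
        lm_logits[:, :-1].reshape(-1, lm_logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
    compression = z.pow(2).sum(dim=-1).mean()
    return ce + beta * compression


def stage2_contrastive_loss(z_query, z_positive, temperature: float = 0.05):
    """Stage 2 (assumed form): InfoNCE over compressed representations,
    using in-batch negatives."""
    q = F.normalize(z_query, dim=-1)
    p = F.normalize(z_positive, dim=-1)
    logits = q @ p.t() / temperature
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy shapes only; a real run would feed hidden states from a causal LLM.
    batch, seq, hidden, bottleneck, vocab = 4, 16, 64, 32, 100
    head = BottleneckHead(hidden, bottleneck)
    z = head(torch.randn(batch, seq, hidden))

    lm_logits = torch.randn(batch, seq, vocab)
    labels = torch.randint(0, vocab, (batch, seq))
    print("stage 1 loss:", stage1_generative_loss(lm_logits, labels, z).item())

    z_pos = head(torch.randn(batch, seq, hidden))
    print("stage 2 loss:", stage2_contrastive_loss(z, z_pos).item())
```

In this reading, stage 1 keeps the LLM's generative objective so domain knowledge is acquired while the bottleneck head compresses semantics, and stage 2 aligns those compressed embeddings contrastively, which is consistent with the abstract's claim of resolving the conflict between the two objectives.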
Similar Papers
Leveraging Domain Knowledge at Inference Time for LLM Translation: Retrieval versus Generation
Computation and Language
Helps computers translate tricky medical and legal words.
Scaling Language-Centric Omnimodal Representation Learning
Computation and Language
Makes computers understand pictures and words better.
Adaptation of Embedding Models to Financial Filings via LLM Distillation
Computation and Language
Teaches AI to find specific money information faster.