ReasonEmbed: Enhanced Text Embeddings for Reasoning-Intensive Document Retrieval
By: Jianlyu Chen, Junwei Lan, Chaofan Li, and more
Potential Business Impact:
Helps computers find answers by understanding complex questions.
In this paper, we introduce ReasonEmbed, a novel text embedding model developed for reasoning-intensive document retrieval. Our work includes three key technical contributions. First, we propose ReMixer, a new data synthesis method that overcomes the triviality problem prevalent in previous synthetic datasets, enabling large-scale production of 82K high-quality training samples. Second, we design Redapter, a self-adaptive learning algorithm that dynamically adjusts each training sample's weight based on its reasoning intensity. This allows the model to effectively capture the complex semantic relationships between queries and documents. Third, we implement ReasonEmbed across multiple backbones of varying sizes, all of which achieve superior performance on reasoning-intensive retrieval tasks. Notably, our ReasonEmbed-Qwen3-8B model achieves a record-high nDCG@10 score of 38.1 on the BRIGHT benchmark, significantly outperforming existing text embedding models. We will fully open-source the resources created for ReasonEmbed to advance research in this field.
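The abstract does not give Redapter's exact formulation, but the core idea of weighting each training sample by its reasoning intensity can be sketched. Below is a minimal, hypothetical Python illustration: each sample carries an assumed reasoning-intensity score that scales its contrastive (InfoNCE-style) loss, so reasoning-heavy query–document pairs contribute more to the gradient. The function names, the score field, and the use of InfoNCE are assumptions for illustration, not the paper's actual implementation.

```python
import math

def info_nce_loss(pos_sim, neg_sims, temperature=0.05):
    """Standard InfoNCE: -log softmax of the positive among all candidates."""
    logits = [pos_sim / temperature] + [s / temperature for s in neg_sims]
    m = max(logits)  # subtract the max for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)

def intensity_weighted_loss(samples, temperature=0.05):
    """Average per-sample InfoNCE losses, weighted by a hypothetical
    reasoning-intensity score so harder samples contribute more."""
    total_w = sum(s["intensity"] for s in samples)
    return sum(
        s["intensity"] * info_nce_loss(s["pos_sim"], s["neg_sims"], temperature)
        for s in samples
    ) / total_w

# Toy batch: one easy pair (positive clearly closest) and one
# reasoning-heavy pair (positive barely separated from negatives).
batch = [
    {"pos_sim": 0.90, "neg_sims": [0.20, 0.10], "intensity": 0.3},
    {"pos_sim": 0.50, "neg_sims": [0.45, 0.40], "intensity": 1.0},
]
loss = intensity_weighted_loss(batch)
```

Under this sketch, up-weighting the hard pair pulls the batch loss above a uniform average, which is one simple way a self-adaptive scheme could emphasize reasoning-intensive samples during training.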
Similar Papers
Large Reasoning Embedding Models: Towards Next-Generation Dense Retrieval Paradigm
Information Retrieval
Helps online shoppers find products even with tricky searches.
Exploring Reasoning-Infused Text Embedding with Large Language Models for Zero-Shot Dense Retrieval
Computation and Language
Helps computers understand text by thinking like people.