ChEmbed: Enhancing Chemical Literature Search Through Domain-Specific Text Embeddings
By: Ali Shiraee Kasmaee, Mohammad Khodadad, Mehdi Astaraki, and more
Potential Business Impact:
Helps scientists find chemistry facts faster online.
Retrieval-Augmented Generation (RAG) systems in chemistry depend heavily on accurate and relevant retrieval of chemical literature. However, general-purpose text embedding models frequently fail to adequately represent complex chemical terminology, resulting in suboptimal retrieval quality. No embedding models specialized for chemical literature retrieval have yet been developed, leaving a substantial performance gap. To address this challenge, we introduce ChEmbed, a domain-adapted family of text embedding models fine-tuned on chemistry-specific text drawn from the PubChem, Semantic Scholar, and ChemRxiv corpora. To create effective training data, we use large language models to synthetically generate queries, yielding approximately 1.7 million high-quality query-passage pairs. Additionally, we augment the tokenizer by adding 900 chemically specialized tokens to previously unused slots, which significantly reduces the fragmentation of chemical entities such as IUPAC names. ChEmbed also maintains an 8192-token context length, enabling efficient retrieval of longer passages than many other open-source embedding models, which are typically limited to 512 or 2048 tokens. Evaluated on our newly introduced ChemRxiv Retrieval benchmark, ChEmbed outperforms state-of-the-art general embedding models, raising nDCG@10 from 0.82 to 0.91 (+9 pp). ChEmbed represents a practical, lightweight, and reproducible embedding solution that effectively improves retrieval for chemical literature search.
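The tokenizer augmentation described above can be illustrated with a toy sketch. This is not the paper's implementation: it uses a hypothetical greedy longest-match subword tokenizer and a made-up mini-vocabulary to show why a chemical name shatters into many fragments under a generic vocabulary, and how adding a whole-entity token (as ChEmbed does with 900 chemical tokens) collapses it to a single piece.

```python
# Toy sketch (hypothetical vocabulary, not ChEmbed's actual tokenizer):
# greedy longest-match subword segmentation, the same idea behind
# WordPiece-style tokenizers used by many embedding models.

def tokenize(text, vocab):
    """Segment text greedily, always taking the longest vocabulary match."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest candidate first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: fall back to char
            i += 1
    return tokens

# A generic vocabulary fragments an IUPAC-style name into many pieces.
generic_vocab = {"meth", "yl", "ben", "zene", "chloro", "di"}
print(tokenize("dichlorobenzene", generic_vocab))
# -> ['di', 'chloro', 'ben', 'zene']

# Adding the whole chemical entity as one token removes the fragmentation.
chem_vocab = generic_vocab | {"dichlorobenzene"}
print(tokenize("dichlorobenzene", chem_vocab))
# -> ['dichlorobenzene']
```

Fewer fragments per chemical entity means each name occupies fewer of the model's context tokens and receives a more coherent representation, which is the motivation the abstract gives for the 900 added tokens.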
Similar Papers
Enhancing Technical Documents Retrieval for RAG
Information Retrieval
Finds answers in tech manuals faster.
Advancing Retrieval-Augmented Generation for Structured Enterprise and Internal Data
Computation and Language
Helps computers understand company secrets better.
Adaptation of Embedding Models to Financial Filings via LLM Distillation
Computation and Language
Teaches AI to find specific money information faster.