The Tokenization Bottleneck: How Vocabulary Extension Improves Chemistry Representation Learning in Pretrained Language Models

Published: November 18, 2025 | arXiv ID: 2511.14365v1

By: Prathamesh Kalamkar, Ned Letcher, Meissane Chami, and more

Potential Business Impact:

Helps language models understand chemical structures, supporting the design and discovery of new medicines.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The application of large language models (LLMs) to chemistry is frequently hampered by a "tokenization bottleneck," where tokenizers tuned on general-domain text tend to fragment chemical representations such as SMILES into semantically uninformative sub-tokens. This paper introduces a principled methodology to resolve this bottleneck by unifying the representation of natural language and molecular structures within a single model. The approach involves targeted vocabulary extension: augmenting a pretrained LLM's vocabulary with chemically salient tokens, followed by continued pretraining on chemistry-domain text to integrate this new knowledge. The authors provide an empirical demonstration of the effectiveness of this strategy, showing that the methodology leads to superior performance on a range of downstream chemical tasks.
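To make the mechanics concrete, below is a minimal sketch of the vocabulary-extension step using the Hugging Face `transformers` API. The base model and the list of chemically salient tokens are illustrative assumptions, not the paper's actual configuration; continued pretraining on chemistry-domain text would follow as a separate training step.

```python
# Minimal sketch of targeted vocabulary extension, assuming the Hugging Face
# `transformers` library. The model name and token list below are illustrative
# placeholders, not the configuration used in the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical chemically salient tokens (SMILES stereocenters, charged atoms,
# common ring and functional-group fragments).
chem_tokens = ["[C@@H]", "[C@H]", "[nH]", "[O-]", "[N+]", "c1ccccc1", "C(=O)O"]

# Add only tokens not already present, then grow the embedding matrix so the
# new rows can be learned during continued pretraining on chemistry text.
num_added = tokenizer.add_tokens(chem_tokens)
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} chemistry tokens; new vocab size: {len(tokenizer)}")

# Before extension, a SMILES string fragments into many uninformative
# sub-tokens; after extension, chemically meaningful units map to single tokens.
print(tokenizer.tokenize("c1ccccc1C(=O)O"))  # benzoic acid SMILES
```

The key design point is that the newly added embedding rows start untrained, which is why the vocabulary extension must be paired with continued pretraining on chemistry-domain text rather than used in isolation.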

Page Count
10 pages

Category
Computer Science:
Computation and Language