The Tokenization Bottleneck: How Vocabulary Extension Improves Chemistry Representation Learning in Pretrained Language Models
By: Prathamesh Kalamkar, Ned Letcher, Meissane Chami, and more
Potential Business Impact:
Teaches computers to understand and create new medicines.
The application of large language models (LLMs) to chemistry is frequently hampered by a "tokenization bottleneck", where tokenizers trained on general-domain text tend to fragment chemical representations such as SMILES into semantically uninformative sub-tokens. This paper introduces a principled methodology to resolve this bottleneck by unifying the representation of natural language and molecular structures within a single model. Our approach involves targeted vocabulary extension: augmenting a pretrained LLM's vocabulary with chemically salient tokens, followed by continued pretraining on chemistry-domain text to integrate this new knowledge. We demonstrate empirically that this strategy leads to superior performance on a range of downstream chemical tasks.
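To make the two-step recipe concrete, below is a minimal sketch of targeted vocabulary extension using the Hugging Face `transformers` API. The base model (`gpt2`), the example SMILES fragments, and the token selection are illustrative assumptions, not the paper's exact setup; the continued-pretraining step on chemistry-domain text would follow after the embedding matrix is resized.

```python
# Sketch: extend a pretrained LLM's tokenizer with chemically salient tokens.
# Base model and token list are placeholders, not the paper's actual choices.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # assumed base model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The "tokenization bottleneck": a general-domain tokenizer fragments SMILES
# into semantically uninformative sub-tokens.
print(tokenizer.tokenize("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin, heavily fragmented

# Hypothetical chemically salient tokens (stereo centers, aromatic ring, etc.).
chem_tokens = ["[C@@H]", "[C@H]", "[nH]", "c1ccccc1", "C(=O)O"]
num_added = tokenizer.add_tokens(chem_tokens)
print(f"Added {num_added} new tokens")

# Grow the embedding matrix so the new tokens get trainable rows; continued
# pretraining on chemistry-domain text would then integrate this knowledge.
model.resize_token_embeddings(len(tokenizer))
```

In practice the new embedding rows start untrained, which is why the continued-pretraining stage on chemistry text is essential rather than optional.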
Similar Papers
NovoMolGen: Rethinking Molecular Language Model Pretraining
Machine Learning (CS)
Creates new medicines faster and better.
Teaching Old Tokenizers New Words: Efficient Tokenizer Adaptation for Pre-trained Models
Computation and Language
Makes computer language models understand new words better.