Training Text-to-Molecule Models with Context-Aware Tokenization
By: Seojin Kim, Hyeontae Song, Jaehyun Nam, and more
Potential Business Impact:
Designs new medicines faster by understanding molecule shapes.
Recently, text-to-molecule models have shown great potential across various chemical applications, e.g., drug discovery. These models adapt language models to molecular data by representing molecules as sequences of atoms. However, they rely on atom-level tokenization, which primarily models local connectivity, limiting the models' ability to capture the global structural context within molecules. To tackle this issue, we propose a novel text-to-molecule model, coined Context-Aware Molecular T5 (CAMT5). Inspired by the significance of substructure-level contexts, e.g., ring systems, in understanding molecular structures, we introduce substructure-level tokenization for text-to-molecule models. Building on our tokenization scheme, we develop an importance-based training strategy that prioritizes key substructures, enabling CAMT5 to better capture molecular semantics. Extensive experiments verify the superiority of CAMT5 across various text-to-molecule generation tasks. Intriguingly, we find that CAMT5 outperforms state-of-the-art methods using only 2% of the training tokens. In addition, we propose a simple yet effective ensemble strategy that aggregates the outputs of text-to-molecule models to further boost generation performance. Code is available at https://github.com/Songhyeontae/CAMT5.git.
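To illustrate the contrast between atom-level and substructure-level tokenization described above, here is a minimal Python sketch. It uses RDKit's BRICS decomposition purely as an assumed stand-in for CAMT5's substructure vocabulary; the paper's actual tokenizer, token ordering, and importance weighting are not reproduced here.

from rdkit import Chem
from rdkit.Chem import BRICS

def atom_level_tokens(smiles: str) -> list[str]:
    # Naive atom-level view: one token per atom, which captures local
    # connectivity but loses larger structural units such as ring systems.
    mol = Chem.MolFromSmiles(smiles)
    return [atom.GetSymbol() for atom in mol.GetAtoms()]

def substructure_level_tokens(smiles: str) -> list[str]:
    # Substructure-level view: BRICS fragments (an assumed proxy for the
    # paper's substructure tokens) keep ring systems and functional groups
    # intact instead of splitting them into individual atoms.
    mol = Chem.MolFromSmiles(smiles)
    return sorted(BRICS.BRICSDecompose(mol))

if __name__ == "__main__":
    aspirin = "CC(=O)Oc1ccccc1C(=O)O"
    print(atom_level_tokens(aspirin))          # e.g. ['C', 'C', 'O', 'O', 'C', ...]
    print(substructure_level_tokens(aspirin))  # e.g. fragments like '[1*]C(C)=O', '[16*]c1ccccc1C(=O)O'

In this sketch, the substructure tokens preserve the benzene ring and the acetyl group as single units, which is the kind of global structural context the abstract argues atom-level tokenization misses.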
Similar Papers
GraphT5: Unified Molecular Graph-Language Modeling via Multi-Modal Cross-Token Attention
Machine Learning (CS)
Helps computers understand molecules better for new drugs.
Bridging Molecular Graphs and Large Language Models
Machine Learning (CS)
Lets computers understand chemical structures like words.
ProtTeX: Structure-In-Context Reasoning and Editing of Proteins with Large Language Models
Biomolecules
Helps computers understand and design proteins by reading their shapes.