HELM-BERT: A Transformer for Medium-sized Peptide Property Prediction
By: Seungeon Lee, Takuto Koyama, Itsuki Maeda, and more
Therapeutic peptides have emerged as a pivotal modality in modern drug discovery, occupying a chemically and topologically rich space. While accurate prediction of their physicochemical properties is essential for accelerating peptide development, existing molecular language models rely on representations that fail to capture this complexity. Atom-level SMILES notation generates long token sequences and obscures cyclic topology, whereas amino-acid-level representations cannot encode the diverse chemical modifications central to modern peptide design. To bridge this representational gap, the Hierarchical Editing Language for Macromolecules (HELM) offers a unified framework enabling precise description of both monomer composition and connectivity, making it a promising foundation for peptide language modeling. Here, we propose HELM-BERT, the first encoder-based peptide language model trained on HELM notation. Based on DeBERTa, HELM-BERT is specifically designed to capture hierarchical dependencies within HELM sequences. The model is pre-trained on a curated corpus of 39,079 chemically diverse peptides spanning linear and cyclic structures. HELM-BERT significantly outperforms state-of-the-art SMILES-based language models in downstream tasks, including cyclic peptide membrane permeability prediction and peptide-protein interaction prediction. These results demonstrate that HELM's explicit monomer- and topology-aware representations offer substantial data-efficiency advantages for modeling therapeutic peptides, bridging a long-standing gap between small-molecule and protein language models.
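To make the representational contrast concrete, the sketch below illustrates how a cyclic peptide might be tokenized at the monomer level directly from its HELM string: monomers (including bracketed non-natural residues) and the explicit macrocyclization connection each become single tokens, rather than the long atom-level sequence a SMILES string would produce. The example HELM string and the `tokenize_helm` helper are illustrative assumptions for exposition, not the paper's actual tokenizer or training data.

```python
import re

# Hypothetical example: a head-to-tail cyclic pentapeptide containing one
# modified monomer ([meA], N-methyl-alanine), written in HELM notation.
# The polymer section lists monomers; the connection section records the
# macrocyclization bond explicitly (illustrative, not taken from the paper).
helm = "PEPTIDE1{A.[meA].G.F.L}$PEPTIDE1,PEPTIDE1,1:R1-5:R2$$$"

def tokenize_helm(helm_string: str) -> list[str]:
    """Toy monomer-level tokenizer: splits a HELM string into polymer IDs,
    monomer tokens (single letters or bracketed modified monomers),
    attachment-point tokens, and structural symbols. A sketch only;
    HELM-BERT's real tokenizer may differ."""
    pattern = r"\[[^\]]+\]|PEPTIDE\d+|CHEM\d+|\d+:R\d+|[A-Z]|[{}.$,\-]"
    return re.findall(pattern, helm_string)

print(tokenize_helm(helm))
# ['PEPTIDE1', '{', 'A', '.', '[meA]', '.', 'G', '.', 'F', '.', 'L', '}',
#  '$', 'PEPTIDE1', ',', 'PEPTIDE1', ',', '1:R1', '-', '5:R2', '$', '$', '$']
```

Note how the cyclic topology and the N-methylated residue each survive as discrete, chemically meaningful tokens, which is the property the abstract argues atom-level SMILES and plain amino-acid sequences cannot provide simultaneously.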