Multitask finetuning and acceleration of chemical pretrained models for small molecule drug property prediction
By: Matthew Adrian, Yunsie Chung, Kevin Boyd, and more
Potential Business Impact:
Finds new medicines faster.
Chemical pretrained models, sometimes referred to as foundation models, are receiving considerable interest for drug discovery applications. The general chemical knowledge extracted from self-supervised training has the potential to improve predictions for critical drug discovery endpoints, including on-target potency and ADMET properties. Multitask learning has previously been leveraged successfully to improve predictive models. Here, we show that enabling multitask finetuning of chemical pretrained graph neural network models such as Kinetic GROVER Multi-Task (KERMT), an enhanced version of the GROVER model, and Knowledge-guided Pre-training of Graph Transformer (KPGT) significantly improves performance over non-pretrained graph neural network models. Surprisingly, we find that the performance improvement from finetuning KERMT in a multitask manner is most significant at larger data sizes. Additionally, we publish two multitask ADMET data splits to enable more accurate benchmarking of multitask deep learning methods for drug property prediction. Finally, we provide an accelerated implementation of the KERMT model on GitHub, unlocking large-scale pretraining, finetuning, and inference in industrial drug discovery workflows.
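To make the setup concrete, below is a minimal PyTorch sketch of the multitask finetuning pattern the abstract describes: a shared pretrained encoder with one output head per endpoint, trained with a masked loss so that molecules measured on only a subset of ADMET tasks still contribute. This is an illustration under stated assumptions, not the paper's released code; the encoder is a dummy stand-in (KERMT's and KPGT's actual APIs are not reproduced here), and all names (`MultitaskModel`, `masked_mse`) are hypothetical.

```python
import torch
import torch.nn as nn

class MultitaskModel(nn.Module):
    """Shared (pretrained) encoder with one output per task.

    The encoder is a placeholder for a chemical pretrained model such as
    KERMT or KPGT; their real interfaces differ and are not shown here.
    """
    def __init__(self, encoder: nn.Module, hidden_dim: int, num_tasks: int):
        super().__init__()
        self.encoder = encoder                         # finetuned jointly with the heads
        self.heads = nn.Linear(hidden_dim, num_tasks)  # one prediction per ADMET task

    def forward(self, x):
        z = self.encoder(x)   # molecule-level embedding
        return self.heads(z)  # shape: (batch, num_tasks)

def masked_mse(pred, target, mask):
    """MSE over observed labels only. Multitask ADMET label matrices are
    typically sparse: each molecule is measured on a subset of endpoints."""
    err = (pred - target) * mask
    return err.pow(2).sum() / mask.sum().clamp(min=1)

# Toy run with a dummy MLP encoder over random "molecule features".
encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU())
model = MultitaskModel(encoder, hidden_dim=256, num_tasks=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(32, 128)                   # stand-in for featurized molecules
y = torch.randn(32, 10)                    # labels for 10 hypothetical endpoints
mask = (torch.rand(32, 10) > 0.5).float()  # 1 where a label was measured

loss = masked_mse(model(x), y, mask)
loss.backward()
optimizer.step()
```

The single shared encoder is the design choice multitask finetuning hinges on: gradients from every endpoint update the same representation, which is how sparsely labeled tasks can reinforce one another during finetuning.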
Similar Papers
All You Need Is Synthetic Task Augmentation
Machine Learning (CS)
Teaches computers to guess molecule traits better.
BioMedGPT-Mol: Multi-task Learning for Molecular Understanding and Generation
Artificial Intelligence
Teaches computers to invent new medicines.
Task-Specific Sparse Feature Masks for Molecular Toxicity Prediction with Chemical Language Models
Computational Engineering, Finance, and Science
Shows drug parts that make them safe or unsafe.