Score: 1

Multitask finetuning and acceleration of chemical pretrained models for small molecule drug property prediction

Published: October 14, 2025 | arXiv ID: 2510.12719v1

By: Matthew Adrian, Yunsie Chung, Kevin Boyd, and more

BigTech Affiliations: NVIDIA

Potential Business Impact:

Finds new medicines faster.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Chemical pretrained models, sometimes referred to as foundation models, are receiving considerable interest for drug discovery applications. The general chemical knowledge extracted from self-supervised training has the potential to improve predictions for critical drug discovery endpoints, including on-target potency and ADMET properties. Multitask learning has previously been leveraged successfully to improve predictive models. Here, we show that multitask finetuning of chemical pretrained graph neural network models such as Kinetic GROVER Multi-Task (KERMT), an enhanced version of the GROVER model, and Knowledge-guided Pre-training of Graph Transformer (KPGT) significantly improves performance over non-pretrained graph neural network models. Surprisingly, we find that the performance improvement from finetuning KERMT in a multitask manner is most significant at larger data sizes. Additionally, we publish two multitask ADMET data splits to enable more accurate benchmarking of multitask deep learning methods for drug property prediction. Finally, we provide an accelerated implementation of the KERMT model on GitHub, unlocking large-scale pretraining, finetuning, and inference in industrial drug discovery workflows.
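
To make the multitask finetuning setup concrete, here is a minimal, hypothetical PyTorch sketch of the general pattern the abstract describes: a shared pretrained molecular encoder feeding one head per ADMET endpoint, with a masked loss so each molecule only contributes to the tasks for which it has labels. The encoder stub, head layout, dimensions, and names below are illustrative assumptions, not the KERMT or KPGT API.

```python
# Illustrative sketch of multitask finetuning (not the KERMT/KPGT code).
# A shared pretrained encoder feeds several task-specific heads; a NaN
# mask in the loss handles molecules with labels for only some endpoints.
import torch
import torch.nn as nn

class PretrainedEncoder(nn.Module):
    """Stand-in for a chemical pretrained GNN/transformer encoder."""
    def __init__(self, in_dim=2048, hidden_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)  # molecule-level embedding

class MultitaskModel(nn.Module):
    def __init__(self, encoder, hidden_dim=512, n_tasks=4):
        super().__init__()
        self.encoder = encoder
        # One regression head per endpoint (e.g. solubility, clearance).
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, 1) for _ in range(n_tasks))

    def forward(self, x):
        h = self.encoder(x)
        return torch.cat([head(h) for head in self.heads], dim=1)  # (batch, n_tasks)

def masked_mse(pred, target):
    """MSE over observed labels only; missing endpoints are NaN."""
    mask = ~torch.isnan(target)
    return ((pred[mask] - target[mask]) ** 2).mean()

# Toy finetuning step on random data with sparse labels.
model = MultitaskModel(PretrainedEncoder())
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(32, 2048)
y = torch.randn(32, 4)
y[torch.rand_like(y) < 0.5] = float("nan")  # each molecule labels only some tasks
loss = masked_mse(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
print(f"loss = {loss.item():.4f}")
```

The masked loss is the standard device that lets sparsely labeled ADMET datasets be pooled into a single multitask model, which is what makes the comparison against single-task, non-pretrained baselines meaningful.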

Country of Origin
🇺🇸 United States

Page Count
27 pages

Category
Computer Science:
Machine Learning (CS)