De Novo Molecular Design Enabled by Direct Preference Optimization and Curriculum Learning

Published: April 2, 2025 | arXiv ID: 2504.01389v1

By: Junyu Hou

Potential Business Impact:

Finds new medicines faster and cheaper.

Business Areas:
Bioinformatics, Biotechnology, Data and Analytics, Science and Engineering

De novo molecular design has extensive applications in drug discovery and materials science. The vast chemical space renders direct molecular searches computationally prohibitive, while traditional experimental screening is both time- and labor-intensive. Efficient molecular generation and screening methods are therefore essential for accelerating drug discovery and reducing costs. Although reinforcement learning (RL) has been applied to optimize molecular properties via reward mechanisms, its practical utility is limited by issues with training efficiency, convergence, and stability. To address these challenges, we adopt Direct Preference Optimization (DPO) from NLP, which uses molecular score-based sample pairs to maximize the likelihood difference between high- and low-quality molecules, effectively guiding the model toward better compounds. Moreover, integrating curriculum learning further boosts training efficiency and accelerates convergence. A systematic evaluation of the proposed method on the GuacaMol Benchmark yielded excellent scores. For instance, the method achieved a score of 0.883 on the Perindopril MPO task, a 6% improvement over competing models. Subsequent target-protein binding experiments confirmed its practical efficacy. These results demonstrate the strong potential of DPO for molecular design tasks and highlight its effectiveness as a robust and efficient solution for data-driven drug discovery.
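The objective described in the abstract matches the standard DPO loss (Rafailov et al., 2023), here applied to pairs of molecules ranked by a property score rather than by human preference labels: for a preferred molecule w and a dispreferred molecule l, the loss is -log σ(β[(log π_θ(w) - log π_ref(w)) - (log π_θ(l) - log π_ref(l))]). Below is a minimal PyTorch sketch of that loss; the function name `dpo_loss`, the `beta=0.1` temperature, and the toy log-probability values are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss over score-ranked molecule pairs.

    Each pair has a preferred (high-score) molecule `w` and a dispreferred
    (low-score) molecule `l`. Inputs are the summed token log-probabilities
    of each molecule's string representation (e.g., SMILES) under the
    trainable policy and under a frozen reference model.
    """
    pi_logratio = policy_logp_w - policy_logp_l    # log pi(w) - log pi(l)
    ref_logratio = ref_logp_w - ref_logp_l         # log ref(w) - log ref(l)
    logits = beta * (pi_logratio - ref_logratio)
    # Maximizing the likelihood gap between high- and low-quality molecules
    # corresponds to minimizing -log sigmoid(logits), averaged over pairs.
    return -F.logsigmoid(logits).mean()

# Toy usage with made-up log-probabilities for a batch of 4 pairs.
lp_w = torch.tensor([-42.1, -38.7, -51.3, -40.0])  # high-score molecules
lp_l = torch.tensor([-44.9, -41.2, -50.8, -47.5])  # low-score molecules
rp_w = torch.tensor([-43.0, -39.5, -50.9, -41.1])
rp_l = torch.tensor([-43.8, -40.6, -50.1, -46.0])
print(dpo_loss(lp_w, lp_l, rp_w, rp_l).item())
```

In the paper's setting, the log-probabilities would presumably come from an autoregressive molecular generator (the policy) and a frozen copy of its pretrained weights (the reference), with the curriculum ordering the training pairs from easier to harder scoring tasks.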

Country of Origin
🇨🇳 China

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)