Score: 1

ModuLM: Enabling Modular and Multimodal Molecular Relational Learning with Large Language Models

Published: June 1, 2025 | arXiv ID: 2506.00880v1

By: Zhuo Chen, Yizhen Zheng, Huan Yee Koh, and more

Potential Business Impact:

Enables faster, fairer construction and benchmarking of LLM-based models that predict how molecules interact, a core task in computational drug discovery.

Business Areas:
Bioinformatics, Biotechnology, Data and Analytics, Science and Engineering

Molecular Relational Learning (MRL) aims to understand interactions between molecular pairs, playing a critical role in advancing biochemical research. With the recent development of large language models (LLMs), a growing number of studies have explored the integration of MRL with LLMs and achieved promising results. However, the increasing availability of diverse LLMs and molecular structure encoders has significantly expanded the model space, presenting major challenges for benchmarking. Currently, there is no LLM framework that supports both flexible molecular input formats and dynamic architectural switching. To address these challenges, reduce redundant coding, and ensure fair model comparison, we propose ModuLM, a framework designed to support flexible LLM-based model construction and diverse molecular representations. ModuLM provides a rich suite of modular components, including 8 types of 2D molecular graph encoders, 11 types of 3D molecular conformation encoders, 7 types of interaction layers, and 7 mainstream LLM backbones. Owing to its highly flexible model assembly mechanism, ModuLM enables the dynamic construction of over 50,000 distinct model configurations. In addition, we provide comprehensive results to demonstrate the effectiveness of ModuLM in supporting LLM-based MRL tasks.
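The flexible assembly mechanism the abstract describes lends itself to a registry-based design, where each component family (2D encoders, 3D encoders, interaction layers, LLM backbones) is a pool of interchangeable constructors. The minimal sketch below shows one way such dynamic switching could work; every name in it (`ENCODERS_2D`, `build_model`, the placeholder component classes) is an illustrative assumption, not ModuLM's actual API.

```python
"""Minimal sketch of a ModuLM-style modular assembly mechanism.

All registries, classes, and functions here are hypothetical; the paper
describes the concept (swappable encoders, interaction layers, and LLM
backbones), not this exact interface.
"""
from dataclasses import dataclass
from typing import Callable, Dict

# Component registries: name -> constructor. The real framework would
# register 8 2D graph encoders, 11 3D conformation encoders,
# 7 interaction layers, and 7 LLM backbones.
ENCODERS_2D: Dict[str, Callable] = {}
INTERACTION_LAYERS: Dict[str, Callable] = {}
LLM_BACKBONES: Dict[str, Callable] = {}

def register(registry: Dict[str, Callable], name: str):
    """Decorator that adds a component constructor to a registry."""
    def wrap(cls):
        registry[name] = cls
        return cls
    return wrap

@register(ENCODERS_2D, "gcn")
class GCNEncoder:
    def encode(self, mol: str) -> str:
        # Placeholder for a 2D molecular graph encoder.
        return f"2D({mol})"

@register(INTERACTION_LAYERS, "bilinear")
class BilinearInteraction:
    def combine(self, a: str, b: str) -> str:
        # Placeholder for a pairwise interaction layer.
        return f"interact({a}, {b})"

@register(LLM_BACKBONES, "llama")
class LlamaBackbone:
    def predict(self, fused: str) -> str:
        # Placeholder for an LLM backbone producing the final prediction.
        return f"llm({fused})"

@dataclass
class Config:
    encoder_2d: str
    interaction: str
    backbone: str

def build_model(cfg: Config):
    """Assemble a model dynamically from the chosen components."""
    enc = ENCODERS_2D[cfg.encoder_2d]()
    inter = INTERACTION_LAYERS[cfg.interaction]()
    llm = LLM_BACKBONES[cfg.backbone]()

    def forward(mol_a: str, mol_b: str) -> str:
        # Encode each molecule, fuse the pair, then query the LLM.
        fused = inter.combine(enc.encode(mol_a), enc.encode(mol_b))
        return llm.predict(fused)
    return forward

# Swapping any registry key yields a different model configuration,
# which is how a large combinatorial model space arises from few parts.
model = build_model(Config("gcn", "bilinear", "llama"))
print(model("CCO", "c1ccccc1"))  # toy SMILES pair
```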

Country of Origin
🇨🇳 🇦🇺 China, Australia

Page Count
22 pages

Category
Computer Science:
Machine Learning (CS)